[Diagram] Networking between VM Hosts, Shared VM Storage, Bulk Media Storage


IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113

I wasn't sure of the best section for this topic, but since I use VMware as the underlying hypervisor for all my workloads, this seemed like a good place to start.

Above is a simplified representation of my home network, mainly from a physical-connection standpoint. Blue connections are 1Gb and purple are 10Gb.

I'm looking for suggestions on how best to segregate my storage traffic (iSCSI from FreeNAS to the VMware cluster, plus UnRAID NFS shares) to optimize performance. My main priority is my media dockers (Plex, etc.) running in my UbuntuSvr VMs; those have pooled NFS shares (from both UnRAID servers) mounted for the dockers to access. Right now everything is on the same VLAN except for vMotion, which has its own.

I just migrated from a vSAN setup, which had its own VLAN, but now that all my VMs are going to be stored on my FreeNAS box and presented via iSCSI, I'm concerned about performance.
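For context, mounting the pooled UnRAID NFS exports inside the Ubuntu VMs typically looks something like the sketch below (hostnames, export paths, and mount points are hypothetical, and the mount options are just common starting values):

```shell
# /etc/fstab entries in an UbuntuSvr VM (hypothetical names/paths).
# "hard" makes the dockers block on a dropped share instead of seeing
# I/O errors; rsize/wsize can be tuned for large media streaming.
unraid1:/mnt/user/media  /mnt/media1  nfs  defaults,hard,rsize=131072,wsize=131072  0 0
unraid2:/mnt/user/media  /mnt/media2  nfs  defaults,hard,rsize=131072,wsize=131072  0 0
```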
 

wildchild

Active Member
Feb 4, 2014
389
57
28
Why don't you put a 10Gb connection to each switch, each on a VLAN of its own? That way you could use two separate paths for storage, each on its own vSwitch (queue depth), and enable round robin.
For the 1Gb NICs I'd also run one path to each switch, maybe put those on a distributed vSwitch, and use LACP trunks.

Sent from my ZP920+ using Tapatalk
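For reference, enabling round robin on the ESXi side would look roughly like this (the device ID is a placeholder for the FreeNAS iSCSI LUN, and the SATP name depends on how the target presents itself):

```shell
# Sketch: set the path selection policy to round robin for one iSCSI
# device (the naa ID below is a placeholder, not a real LUN).
esxcli storage nmp device set \
    --device naa.6589cfc000000 \
    --psp VMW_PSP_RR

# Optionally make round robin the default for all devices claimed by
# a given SATP (VMW_SATP_ALUA is an assumption; check with
# "esxcli storage nmp device list" first).
esxcli storage nmp satp set \
    --default-psp VMW_PSP_RR \
    --satp VMW_SATP_ALUA
```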
 

IamSpartacus

Why don't you put a 10Gb connection to each switch, each on a VLAN of its own? That way you could use two separate paths for storage, each on its own vSwitch (queue depth), and enable round robin.
For the 1Gb NICs I'd also run one path to each switch, maybe put those on a distributed vSwitch, and use LACP trunks.

The X1052 is a 1Gb switch (outside of the port channel to the SG350XG) and the SG350XG is a 10Gb switch.
 

wildchild

Ha, that makes things a bit different.
So connect your 10Gb NICs to the 10Gb switch :)
Since you need three VLANs (StorageNet1, StorageNet2, vMotion), create one vSwitch, but make sure you hook each NIC up to only one storage VLAN, and set up a trunk for your vMotion VLAN.
That way you could still use 2x round-robin iSCSI paths and use 2x 10Gb for multiple vMotions...
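The layout above can be sketched with esxcli (shown here for a standard vSwitch; on a vDS the vmkernel ports are created in vCenter instead). Portgroup names, vmk/vmhba numbers, and addresses are all hypothetical; the key points are one vmkernel per storage VLAN, each on its own subnet, bound to the software iSCSI adapter:

```shell
# Sketch: one vmkernel port per storage VLAN, each pinned to its own
# 10Gb uplink via the portgroup, then bound to the software iSCSI HBA.
esxcli network ip interface add --interface-name vmk2 --portgroup-name StorageNet1
esxcli network ip interface ipv4 set --interface-name vmk2 \
    --ipv4 10.0.1.11 --netmask 255.255.255.0 --type static

esxcli network ip interface add --interface-name vmk3 --portgroup-name StorageNet2
esxcli network ip interface ipv4 set --interface-name vmk3 \
    --ipv4 10.0.2.11 --netmask 255.255.255.0 --type static

# Bind both vmkernels to the software iSCSI adapter so each path gets
# its own NIC (vmhba64 is a placeholder; find yours with
# "esxcli iscsi adapter list").
esxcli iscsi networkportal add --adapter vmhba64 --nic vmk2
esxcli iscsi networkportal add --adapter vmhba64 --nic vmk3
```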
 

IamSpartacus

So connect your 10Gb NICs to the 10Gb switch :)
I assume you mean 1Gb to 10Gb, and as you can see from the two purple connections in the diagram, they already are.

Since you need three VLANs (StorageNet1, StorageNet2, vMotion), create one vSwitch, but make sure you hook each NIC up to only one storage VLAN, and set up a trunk for your vMotion VLAN.
That way you could still use 2x round-robin iSCSI paths and use 2x 10Gb for multiple vMotions...
I currently have two vDSs in vCenter: one for the dual 1Gb connections (management) and one for the dual 10Gb connections (storage, VM traffic). I already have Multi-NIC vMotion set up.

So I guess my main question is what you alluded to: should I dedicate the 10Gb NICs to separate storage networks (iSCSI/NFS), or just separate by vmkernel and let VMware handle the load balancing?
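One common pattern for the "dedicated NIC per storage network" option is an explicit failover override per portgroup, so each path rides one 10Gb NIC with the other as standby. The esxcli form below applies to a standard vSwitch; on a vDS the same override is set per distributed portgroup in vCenter. Portgroup and uplink names are hypothetical:

```shell
# Sketch: pin each storage portgroup to one active uplink, with the
# other 10Gb NIC as standby for failover (names are placeholders).
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name StorageNet1 \
    --active-uplinks vmnic2 --standby-uplinks vmnic3
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name StorageNet2 \
    --active-uplinks vmnic3 --standby-uplinks vmnic2
```

With round robin enabled on the LUNs, both NICs still carry iSCSI traffic in normal operation; the override just keeps each path's traffic deterministic.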

As for FreeNAS, since it's bare metal I'm not sure how best to configure my NICs on that end, as this is my first foray into the BSD world.
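For what it's worth, on the FreeBSD side the equivalent of the two-subnet layout is just two statically addressed interfaces, which in plain FreeBSD lives in /etc/rc.conf as sketched below. On FreeNAS itself this should be set through the web UI so the middleware persists it; interface names, addresses, and the jumbo-frame MTU here are all assumptions:

```shell
# /etc/rc.conf sketch (plain FreeBSD syntax; on FreeNAS use the UI).
# One 10Gb interface per storage subnet so each iSCSI portal has its
# own path; MTU 9000 assumes jumbo frames end-to-end on the SG350XG.
ifconfig_ix0="inet 10.0.1.10 netmask 255.255.255.0 mtu 9000"   # StorageNet1
ifconfig_ix1="inet 10.0.2.10 netmask 255.255.255.0 mtu 9000"   # StorageNet2
```

Keeping each portal on its own subnet is what lets the ESXi round-robin paths stay separate instead of collapsing onto one link.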