thoughts... ESXi 5.5u2 to Synology NFS Share... should I LAG ?


spyrule

Active Member
Hey guys,

This is ALL in a home lab...
Current setup
1 x ESXi host (4 x 1Gb Eth ports; however, 1 is reserved as a passthrough backup for my Sophos gateway VM, which I'll be configuring within the next week).
1 x Xpenology NAS DSM 5.0-4528u2 with 3 x 1Gb Eth ports.
1 x Linksys SRW2048 Layer 3 switch
1 x Cisco SG200-8 L2 Switch
1 x Tomato router/gateway (soon to be replaced by a Sophos UTM VM and converted to a dual AP: a guest network (VLAN'd, direct internet, no intranet access) and a home network (full access)).

Currently ALL servers are connected to the SRW2048, which is wired to my Cisco SG200 switch, where I'll be plugging in my new ESXi host in the very near future.

In the next week or so I'll be setting up a second, smaller ESXi host to run my Sophos gateway (and possibly a secondary Active Directory/DNS/DHCP server). I know I could in theory VLAN my modem traffic to my Sophos VM, but I prefer the idea of a dedicated passed-through port for the inbound side of the Sophos VM.

I've been playing (read: learning) with my ESXi networking configuration, and initially I was planning on running ESXi 5.5u2 with a vSphere Distributed Switch (in 5.5 mode), using LAG (LACP) to combine all 3 x 1Gb ports, and then VLAN all traffic, including iSCSI/NFS, through essentially one fat pipe.

However, I now realize I cannot combine iSCSI with LAG (and it's not recommended/supported).
So I reverted to 2 x 1Gb combined into the LAG, and dedicated 1 x Gb port to iSCSI.

However, this doesn't seem like the best idea to me.

So I've decided to simplify my life and go with a single NFS share.

From what I understand, there is little speed/reliability difference between iSCSI and NFS in a single/few-host, single-storage scenario, so it seems I'd only be making my life harder by using iSCSI.
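For reference, the NFS route is just a single datastore mount per host (the IP, export path, and datastore name below are hypothetical placeholders, not my actual config):

```shell
# Mount a single NFS export from the Synology as a datastore
# (IP, export path, and datastore name are placeholders):
esxcli storage nfs add -H 192.168.1.50 -s /volume1/vmstore -v synology-nfs

# List mounted NFS datastores to confirm:
esxcli storage nfs list
```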

Is there any benefit to using LAG for an NFS share vs. simply using "route based on physical NIC load" in a simple multi-NIC team?

Can someone enlighten me as to the best-practice, most optimal way to utilize my 3-port ESXi setup?

Without LAG, is there a benefit to a vDS, besides "central" config setup, that I cannot achieve with a standard vSwitch?
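For context on that last question: as far as I can tell, "route based on physical NIC load" is itself a vDS-only feature, and a standard vSwitch only exposes the four classic teaming modes. A sketch of the standard-vSwitch side (vSwitch0 is just a placeholder name):

```shell
# Standard-vSwitch teaming: only portid, iphash, mac, or explicit
# are available here ("route based on physical NIC load" appears
# to be vDS-only). vSwitch0 is a placeholder name.
esxcli network vswitch standard policy failover set \
    --vswitch-name vSwitch0 \
    --load-balancing portid   # or: iphash, mac, explicit

# Show the current failover/teaming policy:
esxcli network vswitch standard policy failover get --vswitch-name vSwitch0
```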

My long term goals for this are:
- Allow for redundant/failover protection for my Sophos gateway should an ESXi host require a reboot.
- Allow for optimal use of my Synology NAS for decent/good VM IO.
- Allow for VLANs to help segregate different traffic types (and possibly separating actual "home" use from the home "lab" network traffic).

Any/All suggestions are welcome/appreciated.

I can post a Visio diagram if that helps anyone... just ask.
 

spyrule

Active Member
So here is a network diagram of what I think will be close to my final setup, once I get the hardware for my second ESXi box in the next week or so (sadly it won't match my 1st ESXi box, but its footprint is tiny, yet powerful).



I'm really interested on what people think of this layout, config...
 

Mike

Member
I don't know a lot about the VMware product, but link aggregation probably does no good, throughput-wise, in the NFS setup you currently have. iSCSI multipathing is probably the way to handle this, or multiple NFS shares.
On regular boxes you could do back-to-back round-robin balancing, even with NFS, but that doesn't scale, as your storage device would need a lot of interfaces.
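The multiple-NFS-shares idea could look something like this on the ESXi side (the IPs, export paths, and datastore names are hypothetical; the point is that with vmkernel ports on different subnets, each mount can ride a different physical NIC):

```shell
# Hypothetical sketch: two NFS exports on the Synology, mounted as
# two datastores. With vmkernel ports on separate subnets, each
# datastore's traffic takes a different physical NIC.
esxcli storage nfs add -H 10.0.10.50 -s /volume1/vmstore1 -v nfs-ds1
esxcli storage nfs add -H 10.0.20.50 -s /volume1/vmstore2 -v nfs-ds2
```

You then spread your VMs across the two datastores to balance the load by hand.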
 

Marsh

Moderator
This question is not related to your homelab: I really like your network diagram.
What tool did you use to produce it?
Thanks