ESXi 6.0 teamed uplink for vSwitch without vCenter


weust

Active Member
Aug 15, 2014
I am using ESXi 6.0 with the Infrastructure Client and trying to determine whether I can team two NICs for the uplink on a vSwitch.

With free Hyper-V 2012 R2 it was very easy to set up a LACP team, and after configuring the physical switch I got a nice 2 Gbit/s link.
Under ESXi I can add more than one NIC to the vSwitch, and with the ports on the physical switch still LACP-enabled, I have no network problems. But I believe it's only load balancing, nothing else.

The documentation I have dug up so far all talks about using vSphere (vCenter) for LACP, or creating EtherChannels, which seems to be mainly a Cisco thing. I have neither of those.
I can set up static port groups on the physical switch, but I am uncertain how that works.


Before people start asking why I need this: I don't. I'd just like to create a two-NIC uplink for the vSwitch for redundancy; that is the main thing. And if I can set it up as a 2 Gbit/s team, that would be cool.
Setting it up as a failover setup is easy, of course :)
 

RTM

Well-Known Member
Jan 26, 2014
It is my understanding that you need a distributed switch for LACP on ESXi, and for that you need vCenter (and thus a paid license).
But someone please prove me wrong; it is one of the reasons I am planning to move my home server from ESXi to oVirt.
 

weust

Active Member
Aug 15, 2014
I forgot to mention the distributed switch part. I did think of it while writing the topic, though.

That leaves me with EtherChannel, which I sometimes see referred to as static port groups (or something like that) as well. But I'm not 100% sure that is correct either.
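For what it's worth, a static EtherChannel on the physical switch is supposed to pair with the "Route based on IP hash" policy on a standard vSwitch, and that policy can be set from the ESXi shell even without vCenter. A rough sketch (the vSwitch and vmnic names below are examples; check yours first):

```shell
# List standard vSwitches with their uplinks and current policies
esxcli network vswitch standard list

# Add a second physical NIC as an uplink (vmnic1 is an example name)
esxcli network vswitch standard uplink add \
    --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Switch load balancing to IP hash, which is what a static
# EtherChannel on the physical switch expects
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=iphash \
    --active-uplinks=vmnic0,vmnic1
```

Note that IP hash only spreads traffic across links per source/destination IP pair, so a single flow is still limited to 1 Gbit/s.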
 

TuxDude

Well-Known Member
Sep 17, 2011
Just leave it with the two pNICs attached to the same vSwitch - it's good enough for 99% of use cases, and it is far simpler to set up, with far fewer restrictions/requirements on the network switch side of things. Such a setup provides both redundancy (all traffic will flow over the remaining link if one goes down) and load balancing (the default is by virtual switch port ID, so half of your VMs will end up on one pNIC and half on the other). In the vSwitch properties, just make sure that all of the pNICs are listed as 'active adapters' and you're good to go.
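If you prefer the command line over the client, this setup can also be checked and applied with esxcli (a sketch; vSwitch0/vmnic0/vmnic1 are example names):

```shell
# Show the current teaming policy for the vSwitch;
# the default load balancing is 'portid' (virtual port ID)
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0

# Make sure both pNICs are listed as active uplinks
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1
```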

The only thing you can't do with that setup is give more than 1 Gbit/s of bandwidth to a single VM - but LACP can only do that in certain situations too, so it's not a huge disadvantage.

The advantage of not using LACP is that you have NO requirements on the network switch(es). It works with unmanaged/dumb switches (it would even work with a hub if you still have one around), it works when your pNICs are wired to physically different switches for switch-level redundancy, and it works when you mix'n'match all different kinds/speeds of NICs and switches. It pretty much always just works and rarely breaks or needs any troubleshooting.

You can also still do more advanced networking inside the VM guests with this configuration if required. Say you wanted to do iSCSI with MPIO (either to a VM with a software initiator, or in vSphere with VMware's software initiator), or SMB multichannel to a VM. You just create two port groups on the vSwitch that are almost identical (configured for the same VLAN, etc.), except that in the port group settings you configure one of them with one pNIC active and the other unused, and in the other port group you swap it around so the other pNIC is active. Then you can give a VM two vNICs, one connected to each port group, and that VM now has two separate 1 Gbit/s paths out to the physical network.
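That per-port-group pinning can also be done from the ESXi shell. A sketch, assuming two hypothetical port groups named iSCSI-A and iSCSI-B on vSwitch0 (an uplink left out of both the active and standby lists ends up unused for that port group):

```shell
# Create two near-identical port groups on the same vSwitch
esxcli network vswitch standard portgroup add \
    --portgroup-name=iSCSI-A --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup add \
    --portgroup-name=iSCSI-B --vswitch-name=vSwitch0

# Pin each port group to a different pNIC by overriding
# the vSwitch-level failover policy
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-A --active-uplinks=vmnic0
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-B --active-uplinks=vmnic1
```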
 

weust

Active Member
Aug 15, 2014
Thanks for your lengthy post :)

I know all the advantages and disadvantages, and as I mentioned before, I don't actually need them.
Just having it set up is a "cool to have" at home.
More than 1 Gbit/s is pointless in my case anyway, as my NAS only has a single Gbit connection.

I will settle for the default load-balancing settings for now.