Hardware: 4-port 2.5G on-board NICs (Intel i225-V) - fanless box with an N5015 CPU, 32 GB RAM, 1 TB SSD storage.
Firewall: OPNSense (virtualized)
Desired Config: Three of the physical ports are assigned to a single bridge, and every virtual NIC connects to that bridge, including the management vNet. The remaining physical NIC is dedicated to OPNSense via PCI passthrough as the WAN. A single virtual NIC is then given to OPNSense, so OPNSense believes it only has two NICs: one for the WAN and one for the LAN (the LAN now living on a three-port bridge that OPNSense knows nothing about).
ESX
I've wrestled with this configuration off and on for some time. I first tried it with ESXi 8, but without paying for vSphere this config is a no-go. However, I find it interesting that they only make this config possible if you purchase vSphere; they even describe it in their documentation and point out that it requires a paid license before it can be done.
What I have been doing with ESX to keep some functionality is this: I gave OPNSense three of the four NICs via PCI passthrough, and I bridged two of those NICs inside OPNSense (you can't bridge virtual NICs in OPNSense, because bridging only works when the OS has direct kernel-level access to the hardware). So the final NIC count for OPNSense was four - three via passthrough and one virtual NIC - which gives OPNSense essentially three isolated interfaces to work with (WAN, Virtual, Bridge), with my main private LAN on the bridge. The virtual NIC is used only for managing ESX when I need to work on it while the OPNSense virtual machine is shut down (a need that arises from time to time). The game console stays on WiFi since I can't dedicate a port to it. Yes, I could share the port that also carries the virtual network, swapping cables whenever I need that port for ESX management, but that is not an optimal config and not the one I want to settle on. It would also take the game console off of my main LAN, which again is not desirable.
With ESXi, if you try to add NICs to NIC Teaming, it only offers settings that are useful in a fail-over scenario. It requires a "Load Balancing" setting that tells it how to route packets from the physical NIC to the upstream connection; my assumption is that traffic from the virtual NICs only traverses one NIC, and if that NIC fails or goes offline it fails over based on that setting. On top of that, if I add three NICs to the team and try setting them all to Enabled, the ESX box becomes 100% unreachable, and the only way I can recover it is to wipe the entire config at the console. Wiping just the network config and resetting it to defaults won't do it; I have to wipe the entire config and then re-import the virtual machines manually because they just vanish (though their files are all still there).
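For anyone who wants to see those same teaming settings outside the UI, they are also exposed through esxcli. Something along these lines shows and sets the failover / load-balancing policy on a standard vSwitch - the vSwitch name and vmnic numbers are placeholders, and the option spellings are from memory, so double-check them against the esxcli help before relying on this:

    # Show the current teaming / failover policy on the standard vSwitch
    esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0

    # Mark all three uplinks active and pick a load-balancing mode
    # (modes are portid, mac, iphash, or explicit failover order)
    esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic1,vmnic2,vmnic3 --load-balancing=portid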
PROXMOX
I recently gave it a go with Proxmox, and though this config was possible, it only worked when I created the bridge at the Debian level first. Proxmox is then just handed a virtual NIC by the OS, none the wiser that it's virtualizing an already-virtualized NIC (not that this should matter at all). However, the performance of the virtualized firewall was horrid, giving me throughput of less than 100 megabits on my gigabit Internet service (I got full gigabit speeds with the ESX setup - same hardware, same cables, etc.). I'm not sure whether the performance problem is a Proxmox issue or a Debian / NIC driver issue - I lean toward the OS being the most likely bottleneck, and I'm looking into that at the moment, but my frustration levels are maxing out with this setup.
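For anyone wanting to replicate the Debian-level bridge: on Proxmox it lives in /etc/network/interfaces, and a three-port bridge looks roughly like the sketch below (the interface names and addresses are placeholders, not my actual values):

    # /etc/network/interfaces - three physical ports folded into one bridge.
    # The fourth port (the WAN NIC) is intentionally left out so it can be
    # handed to the OPNSense VM via PCI passthrough.
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports enp2s0 enp3s0 enp4s0
        bridge-stp off
        bridge-fd 0

The Proxmox management IP then sits on vmbr0 itself, and the OPNSense LAN vNIC plus all the other guest vNICs attach to vmbr0. The bridge alone shouldn't cost this much throughput, which is part of why I suspect the OS / driver side; the usual suspects there seem to be the VM NIC model (VirtIO vs. an emulated Intel adapter) and the hardware-offload settings inside the guest, so that's where I'm looking first.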
HYPER-V
I was reading up on Microsoft Hyper-V today, but the language they use when describing "teaming" NICs into a virtual switch is eerily similar to what I've seen with ESXi, where it seems to be a fail-over-only scenario. I haven't dug into it much deeper than that, especially after reading that their implementation of SR-IOV only works with Windows virtual machines ... which, of course, makes sense, since Microsoft's universe doesn't know that Linux exists (with a few rogue exceptions within their dev dungeons).
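From what I can tell so far, the teaming mode newer Hyper-V versions push for virtual switches is Switch Embedded Teaming (SET), which is created from PowerShell rather than the old LBFO teaming. A minimal sketch (the switch and adapter names are placeholders, and I haven't tried this myself) would look something like:

    # Create a SET-enabled external virtual switch spanning two physical NICs
    # (adapter names are placeholders - Get-NetAdapter lists the real ones)
    New-VMSwitch -Name "LanSwitch" -NetAdapterName "Ethernet 2","Ethernet 3" -EnableEmbeddedTeaming $true

SET appears to be a Windows Server-only feature, though, so it may not help on a client Hyper-V install.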
What I am hoping to find with this post is someone who knows of a bare-metal hypervisor - available somewhere on the Internet for me to download and test - where this config can actually be done.
Thank you,
Mike