Bridging Ethernet Ports in a Virtualized Firewall on ESXi


TXAG26

Active Member
Aug 2, 2016

I have a switch with a couple of 10GbE ports, but all are used.
I need 1 more 10GbE switch port.
Both my main TrueNAS box and the TrueNAS backup box are virtualized on vSphere ESXi, and both have dual-port Intel X550 10GbE adapters.
The main TrueNAS box is physically cabled to the network switch.

I want to install a virtual firewall such as OPNsense or pfSense on the main TrueNAS box (in ESXi) and use the virtual firewall to bridge both physical Ethernet ports on the X550, so that I can cable the backup TrueNAS box directly to the primary TrueNAS box and transfer data between both boxes and the physical network switch.
I would need both TrueNAS boxes to be able to communicate back to the physical switch. The backup TrueNAS box's network traffic would traverse the bridge on the primary TrueNAS box and then flow to the physical switch.

I've tried setting this up, but haven't had much luck. I've tried keeping it simple by keeping the network flat (192.168.0.x) with no VLANs, but I lose the connection as soon as I try to bring up the bridge.

Has anyone done something similar, or have any suggestions on making this work? I'm also unclear on how the vSwitches in ESXi should be set up. I've tried creating two separate vSwitches and putting them into promiscuous mode, but that doesn't seem to help either.
 

DavidWJohnston

Active Member
Sep 30, 2020
IIUC what you want to do is this: [10G Switch]--[TrueNAS Primary]--[TrueNAS Backup]--[Spare] - So daisy-chain the backup thru the primary to a physical switch port, and the backup box will have an empty port.

Note that software bridging with promiscuous mode like this adds CPU load, and with 2x 10G it can be a lot. The per-packet processing can also cause performance issues in the guest OS. Also, obviously, if the TrueNAS primary is down, the backup loses its network connectivity too.

If you still want to do it, the physical ports involved need to be bound to separate vSwitches. Adding 2 physical NICs to the same vSwitch is for failover, not for bridging.
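
To make that concrete, here's a rough sketch of that layout from the ESXi shell; the vSwitch, port group, and vmnic names below are made-up examples, so substitute your own:

```
# Hypothetical names: vmnic4/vmnic5 are the two X550 ports; one vSwitch and one port group per port.
esxcli network vswitch standard add --vswitch-name=vSwitch-X550-A
esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch-X550-A
esxcli network vswitch standard portgroup add --portgroup-name=PG-Bridge-A --vswitch-name=vSwitch-X550-A

esxcli network vswitch standard add --vswitch-name=vSwitch-X550-B
esxcli network vswitch standard uplink add --uplink-name=vmnic5 --vswitch-name=vSwitch-X550-B
esxcli network vswitch standard portgroup add --portgroup-name=PG-Bridge-B --vswitch-name=vSwitch-X550-B
```

The firewall VM then gets one VMXNET3 NIC on PG-Bridge-A and one on PG-Bridge-B, and the bridge is built between those two NICs inside the VM.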

Promiscuous mode, forged transmits, and MAC address changes need to be enabled in ESXi on the port groups or vSwitches involved. Enabling them only on dedicated port groups is better, to limit the extra CPU load from promiscuous mode.
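
From the ESXi shell that looks roughly like this, using the same hypothetical port group names as above (the same three checkboxes are in the vSphere client under the port group's security policy):

```
# Override the security policy only on the dedicated port groups,
# so the rest of each vSwitch keeps the default, more restrictive settings.
esxcli network vswitch standard portgroup policy security set \
  --portgroup-name=PG-Bridge-A \
  --allow-promiscuous=true --allow-forged-transmits=true --allow-mac-change=true
esxcli network vswitch standard portgroup policy security set \
  --portgroup-name=PG-Bridge-B \
  --allow-promiscuous=true --allow-forged-transmits=true --allow-mac-change=true
```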

If you want the bridge to pass VLANs, you can set the port groups in ESXi to VLAN 4095; that turns them into trunks that pass all VLAN tags through to the guest. In OPNsense, you may need to create the VLANs you need, to prevent it from dropping tagged traffic that doesn't match a configured VLAN (not sure about this though).
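
Roughly, that part could look like this; the esxcli lines set the VLAN 4095 trunk, and the ifconfig lines are just the FreeBSD equivalents of the VLAN and bridge objects OPNsense creates from its GUI, with example interface names (a flat, untagged network can skip the VLAN interfaces):

```
# ESXi side: make both port groups trunks that pass all VLAN tags through to the guest.
esxcli network vswitch standard portgroup set --portgroup-name=PG-Bridge-A --vlan-id=4095
esxcli network vswitch standard portgroup set --portgroup-name=PG-Bridge-B --vlan-id=4095

# Guest side (example names): what OPNsense builds underneath when you add VLANs and a bridge.
ifconfig vmx1.20 create                    # VLAN 20 on the first VMXNET3 NIC
ifconfig vmx2.20 create                    # VLAN 20 on the second VMXNET3 NIC
ifconfig bridge0 create
ifconfig bridge0 addm vmx1 addm vmx2 up    # bridge the two NICs facing the two port groups
```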
 

TXAG26

Active Member
Aug 2, 2016
I actually got this working! Thank you for the directions. The bridge will pass 1.4 Gbps of traffic on a 2.4 GHz E5-2680 v4 CPU that turbo boosts to 2.8 GHz on each core. From what I could tell, a transfer across the bridge used one core per NIC. In total, ESXi showed the OPNsense machine using 5.5 GHz of processing power, even though I had 8 cores (19.2 GHz) allocated to that VM. This tells me the setup is limited by per-core clock speed. I am going to test this same setup with an E5-1650 v4 (6-core, 3.6 GHz base / 4.0 GHz boost) and see how that performs. With these particular machines and the document-type files being backed up, transfers rarely exceed about 3-4 Gbps, so I'd be happy with anything in the 2-3 Gbps range with this setup if it saves me from having to buy another 10GbE switch.
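
For reference, esxtop on the ESXi host is an easy way to see this kind of per-core bottleneck, since it breaks a VM's CPU use down per world (per vCPU):

```
# SSH to the ESXi host; the CPU view is esxtop's default ('c').
# Press 'e' and enter the OPNsense VM's GID to expand it into per-vCPU worlds.
# One or two worlds pegged near 100% while the rest sit idle = clock-speed limited, not core limited.
esxtop
```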
 

DavidWJohnston

Active Member
Sep 30, 2020
I'm glad you got it working. Using jumbo frames might improve the speed.

It also sounds like you might benefit from RSS (Receive-Side Scaling) - This creates multiple queues for receiving packets, and distributes the load across cores.

If you're using VMXNET3 NICs and have more than one vCPU assigned to your OPNsense VM, I would think RSS should be enabled automatically. This thread on the OPNsense forum might help you determine if RSS is enabled: [Tutorial/Call for Testing] Enabling Receive Side Scaling on OPNsense
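
A couple of quick checks from the OPNsense shell, based on my reading of that tutorial; the sysctl names are the ones it discusses, so verify them against your OPNsense version:

```
# Is RSS enabled, and across how many buckets? (tunables discussed in the linked tutorial)
sysctl net.inet.rss.enabled net.inet.rss.bits

# netisr queue/thread layout; multiple bound threads means receive work can spread across cores.
netstat -Q

# Jumbo frames, if every hop in the path supports them (example interface names):
ifconfig vmx1 mtu 9000
ifconfig vmx2 mtu 9000
```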

In my homelab, I run pfSense, and I found the following NIC offload settings provided the highest performance, YMMV.

[Attached screenshot: pfSense NIC hardware offload settings]
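
In case the attachment doesn't load for you: those GUI checkboxes correspond to standard FreeBSD per-interface offload flags. As a generic example of toggling them from the shell (whether enabling or disabling wins depends on the NIC and driver):

```
# Disable hardware checksum, TSO and LRO offload on an interface (drop the '-' to re-enable):
ifconfig vmx1 -txcsum -rxcsum -tso -lro
```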
 

TXAG26

Active Member
Aug 2, 2016
Good deal, thank you, I'll take a look at those tweaking suggestions as well.
I did add the OPNsense system tunables found in Step 6 here:

I may try playing around with those two tunables as well, as I've seen mixed info on whether they're really needed with my setup. That might also juice the performance up a bit.
 

TXAG26

Active Member
Aug 2, 2016
I am using the VMXNET3 drivers on ESXi 8 with OPNsense, as I've had the best luck performance-wise with them.