Linux NIC Bridging - Is there a downside?


Allan74

Member
May 15, 2019
Unfortunately, this has far more to do with BSD than it does with Linux (the hypervisor is Proxmox).

Q: Is the use of NIC bridges and VirtIO within a hypervisor an acceptable way to achieve the desired line speeds in/from a VM that doesn't natively support the NIC hardware?
(Think RTL 2.5GbE WAN/LAN and pfSense.)

I know very little about BSD, never mind Linux, so I don't have the skills to play around with compiling anything.
I simply need to be able to accept a 2.5GbE input for an upcoming Internet upgrade, and I am taking a very simple approach.
I am essentially using a simple pair of bridges in my virtual server to accomplish what I need for input/output.

Onboard Intel 1GbE NIC bridged with a (BSD-unsupported) RTL 2.5GbE as INPUT. (The second onboard port is spare/dedicated IPMI.)
Quad-port Intel 1GbE bridged with a dual-port 10GbE for OUTPUT.

It all works, but other than the possible security benefit of a dedicated hardware-passthrough INPUT NIC for the firewall, what else am I giving up?

This server will only spin up a few VMs and is just for home use, so no datacenter-grade security is really required.

...but any suggestions would be greatly appreciated.

thanks,
Allan
 

Blue)(Fusion

Active Member
Mar 1, 2017
Chicago
If I understand correctly, you want to bond/LAG two VirtIO gigabit vNICs in pfSense/OPNsense inside of Proxmox to make better use of 2.5G physical links?

In an LACP bond, each stream is limited to one interface (in this case, a virtual interface), but multiple streams can span multiple interfaces.
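For reference, if you do go down that road, a single LACP bond feeding a bridge in Proxmox's /etc/network/interfaces looks roughly like this. The interface and bridge names below are placeholders, not your actual ports, so treat it as a sketch rather than a drop-in config:

Code:
# Hypothetical bond of two gigabit ports; the switch side must also be configured for 802.3ad/LACP
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

# Bridge for the VMs, with the bond as its only member port
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
Even then, as noted above, any single stream still tops out at the speed of one member link.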

With that said, do you really need 2.5G links for your edge router?

As someone who has hacked and jerry-rigged configurations for years to maximize theoretical performance, the least hacky way is the best way 11/10 times.
 

LodeRunner

Active Member
Apr 27, 2019
From my brief reading, Proxmox's VirtIO appears to be equivalent to ESXi's VMXNET3 or Hyper-V's network driver, both of which show up as 10Gbit rather than the emulated E1000 driver. So the pfSense guest should only see the VirtIO driver, not the RTL driver. I run Mellanox cards in my Hyper-V host, but the guests (including pfSense) only see the HV net driver and are utterly unaware of the underlying hardware.

Do Proxmox guests only see gigabit or can you configure the speed?

Why is bridging a solution here? Is this a Proxmox-specific thing?
 

Blue)(Fusion

Active Member
Mar 1, 2017
Chicago
LodeRunner said:
From my brief reading, Proxmox's VirtIO appears to be equivalent to ESXi's VMXNET3 or Hyper-V's network driver, both of which show up as 10Gbit rather than the emulated E1000 driver. So the pfSense guest should only see the VirtIO driver, not the RTL driver. I run Mellanox cards in my Hyper-V host, but the guests (including pfSense) only see the HV net driver and are utterly unaware of the underlying hardware.

Do Proxmox guests only see gigabit or can you configure the speed?

Why is bridging a solution here? Is this a Proxmox-specific thing?
Good point.

My VirtIO vNICs show up as 10Gig in OPNsense and pfSense under Proxmox.
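If anyone wants to double-check from inside the guest, the VirtIO NICs come up as vtnet devices in FreeBSD, and the media line reports a 10G link regardless of what physical card sits behind the Proxmox bridge. Roughly (interface name is just an example):

Code:
# From the pfSense/OPNsense shell; vtnet0 is whichever vNIC you want to inspect
ifconfig vtnet0 | grep -E 'vtnet0|media'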
 

Allan74

Member
May 15, 2019
Blue)(Fusion said:
With that said, do you really need 2.5G links for your edge router?

Both to and from. If I want to take full advantage of the Internet package from my ISP, I need greater than 1GbE.

LodeRunner said:
Do Proxmox guests only see gigabit or can you configure the speed?
Why is bridging a solution here? Is this a Proxmox-specific thing?

Neither OPNsense nor pfSense recognizes my RTL 2.5GbE NICs, and with everything hinging on a 2.5GbE input for the package I will be upgrading to from my ISP, I just found that setting up a bridge seems to work. Even my Proxmox management NIC (Intel igb driver) is listed as a bridge by itself, so that is where I got the idea.

Each guest NIC within Proxmox shows up as 10GbE, and the overall speed to/from the box is set by whichever NIC in the bridge I plug a cable into.

Rather than complicate things, I simply decided to run everything bridged, well, in two separate bridges. This allows me to use an unsupported 2.5GbE NIC for input and to have outputs at multiple speeds, giving me a few extra ports (a combination of 1GbE, 2.5GbE and 10GbE ports out).

As an aside, I am NOT running any port redundancy or NIC teaming - YET. Once I get everything figured out, I will do a fresh installation of everything and make sure the main input and output each have at least one redundant pair.

I just wanted to make sure that the direction I was heading wasn't a dead end and was, at a minimum, somewhat acceptable.

thanks,
Allan
 

LodeRunner

Active Member
Apr 27, 2019
Bridging must be Proxmox's name for creating a virtual network, because normally when I hear "network bridging" I think of tying two physical adapters together to make a software switch (a network switch, at its most basic layer-2 implementation, is a multi-port MAC bridge).

In ESXi and Hyper-V parlance, you create a virtual network, and guests then have vNICs that you connect to that network. In my cluster at home, I have a single virtual network on a trunked interface and just set the VLAN tags on the guest interfaces to pull the right network, but that's because I was being lazy and short on 10G ports.

So yes, it looks like you would create a bridge using the RTL 2.5G and a separate bridge using your 10G link, then add two VirtIO interfaces to your pfSense guest, one for each bridge, because pfSense will see the VirtIO driver, not the actual physical adapter. Just be sure to disable hardware offload, as BSD has a known crash condition with many network drivers, including VirtIO, when hardware offload is enabled.
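In pfSense that's the "disable hardware offload" checkboxes under System > Advanced > Networking; if you'd rather pin it in the boot config, the vtnet loader tunables should look something like the lines below (going from memory here, so verify against the vtnet(4) man page before relying on them):

Code:
# /boot/loader.conf.local inside the pfSense/OPNsense guest
# Disable checksum/TSO/LRO offload for the VirtIO NICs
hw.vtnet.csum_disable="1"
hw.vtnet.tso_disable="1"
hw.vtnet.lro_disable="1"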
 

Blue)(Fusion

Active Member
Mar 1, 2017
Chicago
If you don't mind, show your /etc/network/interfaces with anything identifiable (GUA IPv6, etc.) redacted. I can't quite grasp what you've done from your explanation.
 

Allan74

Member
May 15, 2019
It's a pretty simple RED bridge (vmbr1) / GREEN bridge (vmbr2) setup in Proxmox, with the YELLOW original management interface (set up as a bridge by default by Proxmox) left dormant post-setup. (I color-coded things to keep them straight in my head... LMAO, I am a child.)
While the colors should speak for themselves, RED is made up of a natively BSD-supported 1GbE NIC and an RTL 2.5GbE that is not supported, but is required for my setup to accommodate faster-than-1Gbit Internet. The GREEN bridge is simply a 4-port Intel 1GbE card and a dual-port 10GbE card.

Code:
auto vmbr0
iface vmbr0 inet static
        address xxx.xxx.xxx.xxx  # original management IP, redacted
        bridge-ports enp9s0f1
        bridge-stp off
        bridge-fd 0
#YELLOW

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp7s0 enp8s0
        bridge-stp off
        bridge-fd 0
#RED Bridge

auto vmbr2
iface vmbr2 inet static
        address 10.9.8.7/24
        gateway 10.9.8.1
        bridge-ports enp2s0 enp2s0d1 enp5s0f0 enp5s0f1 enp6s0f0 enp6s0f1
        bridge-stp off
        bridge-fd 0
#GREEN Bridge
Post-setup, along the GREEN bridge my Proxmox management IP is 10.9.8.7, the OPNsense gateway is 10.9.8.1, etc. RED goes to the ISP via DHCP.

To further demystify things, I have done it this way because I don't have an Intel 2.5GbE card to pass through to OPNsense. I can pass the RTL card through directly, but BSD doesn't see it, hence VirtIO/bridge. The secondary 1GbE NIC included in the RED bridge was added as a physical backup, and I likely will not bond them, as the ISP hardware will not support it; so it's a physical cable swap if/when the RTL has problems.
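For anyone finding this later, the OPNsense guest itself then just gets one VirtIO vNIC on each bridge in its VM config. Roughly like this (the VM ID and MACs below are made up; Proxmox fills in real values when you add the NICs through the GUI):

Code:
# /etc/pve/qemu-server/100.conf -- illustrative snippet only, 100 is a placeholder VM ID
# WAN-side vNIC on the RED bridge (vmbr1)
net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr1
# LAN-side vNIC on the GREEN bridge (vmbr2)
net1: virtio=AA:BB:CC:DD:EE:02,bridge=vmbr2
The guest only ever sees these as vtnet interfaces, never the RTL or Intel cards behind the bridges.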
 