SR-IOV and NIC Passthrough


antsh

New Member
Sep 14, 2017
I have SR-IOV up and running on my i350 in Proxmox 7.3. I can see the physical ports and 4 VFs per port, and I'm able to assign them to guests. The weird (to me?) thing is that if I pass through just one VF to a guest, say a Windows VM, I get 5 NICs showing up in Windows: the physical port and all 4 VFs. Is that normal? I would assume that if I just pass through a single VF device, only that NIC should show up in the VM?
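For context, the per-port VFs are usually created on the Proxmox host by writing to sysfs. Below is a minimal Python sketch of that step; the PCI address 0000:01:00.0 and the count of 4 are placeholders for illustration, not values from this particular box.

Code:
#!/usr/bin/env python3
"""Minimal sketch of creating SR-IOV VFs on a Linux/Proxmox host via sysfs.
The PCI address and VF count are assumptions for illustration; substitute
the address of your own i350 port."""

from pathlib import Path

PF_ADDR = "0000:01:00.0"   # hypothetical PCI address of one i350 port
NUM_VFS = 4                # number of VFs to create on that port

pf = Path("/sys/bus/pci/devices") / PF_ADDR

# The driver reports how many VFs this port can support.
total = int((pf / "sriov_totalvfs").read_text())
print(f"{PF_ADDR} supports up to {total} VFs")

# Writing 0 first resets the count before setting a new one.
(pf / "sriov_numvfs").write_text("0")
(pf / "sriov_numvfs").write_text(str(min(NUM_VFS, total)))

# Each VF shows up as its own PCI function, linked as virtfn0..virtfnN-1.
for vf in sorted(pf.glob("virtfn*")):
    print(vf.name, "->", vf.resolve().name)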
 

MrCalvin

IT consultant, Denmark
Aug 22, 2016
www.wit.dk
I know it's a little off-topic, but how big is the gain from NIC SR-IOV on KVM? Is it really worth it for "normal" network traffic? I bridge my onboard i210/i219s to my VMs and I see very good performance; I couldn't really ask for more.
For iSCSI or similar it might be another story, where every millisecond of latency counts.
These SR-IOV NICs often use extra power too.
 

mattventura

Active Member
Nov 9, 2022
antsh said:
I have SR-IOV up and running on my i350 in Proxmox 7.3. I can see the physical ports and 4 VFs per port, and I'm able to assign them to guests. The weird (to me?) thing is that if I pass through just one VF to a guest, say a Windows VM, I get 5 NICs showing up in Windows: the physical port and all 4 VFs. Is that normal? I would assume that if I just pass through a single VF device, only that NIC should show up in the VM?
Check your IOMMU groups (and see if there are any other relevant BIOS settings). If they're all in the same IOMMU group, then you can't pass them through individually. Try this script.
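Roughly, that kind of script just walks /sys/kernel/iommu_groups and describes every device in each group. A Python sketch of the same idea (it assumes lspci from pciutils is installed), so you can see whether the i350 PF and its VFs ended up in separate groups:

Code:
#!/usr/bin/env python3
"""Rough sketch of an IOMMU-group listing: walk /sys/kernel/iommu_groups and
print every PCI device in each group. Assumes lspci (pciutils) is installed."""

import subprocess
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    for dev in sorted((group / "devices").iterdir()):
        # lspci -nns <addr> prints a human-readable description of the device.
        desc = subprocess.run(
            ["lspci", "-nns", dev.name],
            capture_output=True, text=True,
        ).stdout.strip()
        print(f"IOMMU group {group.name}: {desc or dev.name}")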

MrCalvin said:
I know it's a little off-topic, but how big is the gain from NIC SR-IOV on KVM? Is it really worth it for "normal" network traffic? I bridge my onboard i210/i219s to my VMs and I see very good performance; I couldn't really ask for more.
For iSCSI or similar it might be another story, where every millisecond of latency counts.
These SR-IOV NICs often use extra power too.
Very big "it depends". There's two different aspects of "performance" - raw throughput, and how much CPU overhead is consumed in the process. SR-IOV is almost always going to reduce CPU overhead, because the switching is offloaded to the hardware entirely. However, it also shifts the bottleneck from CPU, to PCIe bandwidth and the NIC's hardware. Generally that's a non-issue for communication between an VM and elsewhere on your network, because the worst case scenario is that you're limited by the link speed. However, for VM-to-VM communication, SR-IOV can cap out at lower speeds, because you bottleneck on PCIe bandwidth instead (whereas software bridging does not). e.g. a 10GbE single-port NIC might only have 16gbits of PCIe bandwidth, but that means that your VM-to-VM speed (or VM-to-host) is effectively limited to 16gbit half-duplex. It's usually not a big issue, but can be noticeable when running multiple promisc VFs, since that means traffic gets effectively multiplied.