SR-IOV...I guess it works after all.


kapone

Well-Known Member
This has been on my back burner for the longest time. I run pfSense virtualized, with the WAN terminated at my ICX-6610 in a VLAN (no SVI) and a single "trunk/transit" pipe to it. All local routing is handled by the 6610; only external traffic gets to pfSense.

I had been running it with vmxnet3 NICs, since the last few times I tried SR-IOV I failed miserably. Well, new year, new resolve... right? :)

So...the equipment list:

- el cheapo LGA 1150 motherboard from eBay, which, interestingly, came with a single-port Intel 82599 NIC.


This was ~$20 when I bought two of them; the seller (central_valley_computer_parts_inc on eBay) doesn't seem to have them listed now, but they may still have some.
- el cheapo E3-1220 v3, paired with 8GB ECC UDIMMs.
- ESXi 6.7 (mine is licensed, but I suspect the 60-day evaluation should still work; after that, I doubt it will).

The motherboard had SR-IOV for the 82599 enabled by default. Once I had ESXi up and running, I enabled it on the host as well and added 8 VFs for it.

[Screenshot: SR-IOV enabled on the 82599 in the ESXi host client, showing 8 virtual functions]
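For anyone who prefers the ESXi shell over the host client UI, the same VF setup can also be done with esxcli. This is just a rough sketch: it assumes the 82599 is claimed by the ixgben driver and shows up as vmnic2, both of which you should verify first with the list commands.

# Confirm which driver owns the 10Gb port and whether it is SR-IOV capable
esxcli network nic list
esxcli network sriovnic list

# Ask the driver for 8 VFs on that port (the module may be ixgbe on older builds), then reboot the host
esxcli system module parameters set -m ixgben -p "max_vfs=8"

# After the reboot, the VFs should show up
esxcli network sriovnic vf list -n vmnic2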

Then added the VF adapters to my pfSense VM:

[Screenshot: the SR-IOV VF adapters added to the pfSense VM's settings]

Note: ESXi is funky when it comes to adding NICs like this. You have to DELETE the existing NICs (if any) and add new ones, otherwise they won't be added correctly.

Fired up the VM and... it crashed and burned. Bummer. I thought that was it, the final straw; this is where I had left off last year, and it never worked. Then I got angry and told myself there's gotta be a way to make this work. So...

Hours and hours of surfing ugly message boards in all sorts of languages... and nothing. The drivers are supposedly there, it should work... yadda yadda, but nothing. Then I spotted something that seemed interesting: a thread on the Netgate forum discussing the same driver (Intel ix), though not in an SR-IOV context per se ("Intel IX only using single queue").

But it contained a lil nugget...

hw.pci.honor_msi_blacklist=0

I decided to try it anyway, with little hope to be honest. I added that tunable to /boot/loader.conf in pfSense, added the SR-IOV NICs again, and fired up the VM.
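For reference, that is the whole change on the pfSense side. On pfSense it's generally safer to put custom tunables in /boot/loader.conf.local, since /boot/loader.conf can be rewritten on updates, and a reboot is needed for a loader tunable to take effect.

# /boot/loader.conf.local (or /boot/loader.conf, as above)
hw.pci.honor_msi_blacklist=0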

We have a winner! That was all that was needed: the ix VF driver loaded correctly, and the performance is awesome.
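If you want to confirm the VF driver actually attached, here are a couple of quick checks from the pfSense (FreeBSD) shell; the exact device names will vary, ixv0 is just an example for an 82599 VF:

# The loader tunable should read back as 0 after the reboot
sysctl hw.pci.honor_msi_blacklist

# The VFs should be bound to the ixv(4) driver
pciconf -lv | grep -B1 -A3 ixv
dmesg | grep -i ixv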

I thought this may be interesting to some of you. :)
 

kapone

Well-Known Member
Some performance figures (DSLReports speed test).

[Screenshot: DSLReports speed test results]

Details about bufferbloat (which is very interesting to me, as I host some stuff)

[Screenshot: DSLReports bufferbloat detail]

Less than 20ms while uploading, and I can improve on this, I think. I need to take another look at the BIOS settings and see if I enabled power savings etc.
 

anoother

Member
That board looks great for an HTPC build I have in mind... do you know the model number?

Does the SFP+ line up with/fit into a rear panel slot, or would dremelling be required?
 

kapone

Well-Known Member
I'm not sure that board will work that well for an HTPC. The E3-1200 series CPUs only have 16 PCIe lanes, and this board uses 8 of them for the 10Gb NIC (the board was designed for dual 10Gb NICs) and 8 of them for the onboard SAS 2008 controller.

The PCIe slot you see is only PCIe 2.0 x4, and it's wired to the PCH, not even the CPU. So adding a GPU would be a no-no.

And yes, dremel would be needed.
 

zack$

Well-Known Member
I don't think this works on 6.5, which I'm on because manual PCIe mapping is gone in 6.7.
 

kapone

Well-Known Member
I do have 6.5 lying around as well, but have not tested it with that. If I get some time, I will.
 

kapone

Well-Known Member
Minor tweaks to my routing setup, and running the tests again from the DMZ instead of the LAN (fewer hops).

[Screenshot: speed test results from the DMZ]

The bufferbloat is now really good. ~12-14ms whether uploading or downloading.

[Screenshot: bufferbloat results from the DMZ]
 

jcl333

Active Member
Just out of curiosity, why are you trying to use SR-IOV in this situation as opposed to just pass-thru?

I too am currently using pfSense with vmxnet3, on a similar but older motherboard (I'm not sure mine supports SR-IOV, but it might), but I was planning on switching over to pass-through, mainly for security: in theory no part of the hypervisor is then exposed to the Internet, and since you bypass it you aren't incurring any performance overhead (either way, your performance seems quite good, so that's less of a concern). I have two on-board 1Gig Intel NICs, a quad-port Intel I350 add-on NIC, and a dual 10Gig Intel X550 NIC.

I can definitely see the utility of doing this on internally-facing networks, although trunking would also be an option.

I don't see too many real-world uses of SR-IOV, so I like to ask why when I see people using it. Maybe it's just "because you can", to try it out; that would be a good enough reason, of course.

I also want to try RDMA at some point but those NICs are not cheap :-(

-JCL
 

kapone

Well-Known Member
My WAN is terminated at my layer 3 switch in a dedicated VLAN with no SVI. Anything accessing this VLAN first needs to be connected to the "other" side of that VLAN. Just because the hypervisor is sitting there with a vSwitch connected to an uplink port on my layer 3 switch doesn't mean it can access the WAN (or that anybody on the WAN can access the hypervisor).

There's no one listening on the hypervisor end. The only thing that's listening is the firewall VM.
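In case it helps picture it, here's a minimal sketch of that WAN VLAN in ICX/FastIron terms. The VLAN ID and port numbers are made up (1/1/1 facing the modem, 1/1/2 being the trunk to the ESXi host); the point is simply that no router-interface (SVI) is defined for it:

vlan 100 name WAN-TRANSIT by port
 untagged ethe 1/1/1
 tagged ethe 1/1/2
! deliberately no "router-interface ve 100", so the switch has no L3 presence in this VLAN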
 

zack$

Well-Known Member
I haven't been able to get this working on 6.5 U2 at all. On the other hand, I've had luck on 6.7 and 7.0.
 

mimino

Active Member
I've been struggling with this as well; I had two attempts and did not get it working with a FreeBSD guest on a Proxmox host. I haven't tried this trick yet; hopefully it works. I have a SolarFlare SFN7022F card in a SuperMicro X10SL7-F board with an E3-1271 v3.

I also have other issues that I can't find a solution for. First, I'm not able to allocate more than 8 VFs per port. I think it has something to do with the board/BIOS not implementing ARI (Alternative Routing-ID Interpretation) properly. It's unlikely that anything can be done to fix it.

I'm also getting this error even with 8 VFs and everything kind of working:
[ 0.845286] DMAR: DRHD: handling fault status reg 3
[ 0.845354] DMAR: [DMA Write] Request device [07:00.0] PASID ffffffff fault addr d0304000 [fault reason 05] PTE Write access is not set
And lastly, I can't get SR-IOV to work at all in the PCIe 3.0 CPU slot. All VFs end up in the same IOMMU group for some reason. The PCIe 2.0 slot off the PCH works without issues.

Overall this platform turned out to be pretty disappointing from this perspective. It might be too old, SM might not have cared, or it might be a processor/chipset limitation.
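For anyone poking at the same problems, a couple of checks on the Proxmox (Linux) host side can at least narrow things down; the PCI address 07:00.0 comes from the DMAR error above, the rest is generic:

# See how the IOMMU grouped the devices; all VFs landing in one group usually
# points at missing ACS isolation on that slot/root port rather than at the NIC
find /sys/kernel/iommu_groups/ -type l | sort -V

# Check whether the NIC advertises ARI, which is what allows more than
# 8 functions per device
lspci -s 07:00.0 -vvv | grep -i ari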