This has been on my back burner for the longest time. I run pfSense virtualized, with the WAN terminated at my ICX-6610 in a VLAN (no SVI) and a single "trunk/transit" pipe to it. All local routing is handled by the 6610, only external traffic gets to pfSense.
Had been running it using vmxnet3 NICs, since the last few times I tried SR-IOV, I failed miserably. Well, new year, new resolve...right?
So...the equipment list:
- el cheapo LGA 1150 based motherboard from eBay, which, interestingly, came with a single-port Intel 82599 NIC.
These were ~$20 each when I bought two of them; the seller (central_valley_computer_parts_inc on eBay) doesn't seem to have them listed now, but they may still have some.
- el cheapo E3-1220 v3, paired with 8GB ECC UDIMMs.
- ESXi 6.7 (mine is licensed, but I suspect the 60-day evaluation would work too; after that, I doubt it will).
The motherboard had SR-IOV for the 82599 enabled by default. Once I had ESXi up and running I enabled it in the host as well and added 8 VFs for it.
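For anyone wanting to do the same from the CLI instead of the host UI, this is roughly what the VF setup looks like on ESXi 6.7. This is a sketch, not my exact session: it assumes the 82599 is bound to the native ixgben driver and shows up as vmnic1 on your host, so check yours first.

```shell
# See which vmnic the 82599 is and which driver it's bound to:
esxcli network nic list

# Ask the driver for 8 VFs on that NIC (requires a host reboot to take effect):
esxcli system module parameters set -m ixgben -p "max_vfs=8"

# After the reboot, confirm the VFs actually exist:
esxcli network sriovnic list
esxcli network sriovnic vf list -n vmnic1
```

You can also flip the SR-IOV toggle per-NIC in the host web UI under Manage > Hardware > PCI Devices; either way the VF count only changes after a reboot.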
Then added the VF adapters to my pfSense VM:
Note: ESXi is funky when it comes to adding NICs like this. You have to DELETE the existing NICs (if any) and add new ones, otherwise they won't be added correctly.
Fired up the VM and...it crashed and burned. Bummer. I thought that was it, the final straw; this is where I left off last year, and it never worked. Then I got angry and said: there's gotta be a way to make this work. So...
Hours and hours of surfing ugly message boards in all sorts of languages...and nothing. The drivers are supposedly there, it should work, yadda yadda, but nothing. Then I spotted something that seemed somewhat interesting: a thread on the Netgate forum discussing the same driver (Intel ix), though not in an SR-IOV context per se ("Intel IX only using single queue").
But it contained a lil nugget...
hw.pci.honor_msi_blacklist=0
I decided to try it anyway, with little hope to be honest. Added that tunable to /boot/loader.conf in pfSense, added the SR-IOV NICs again, and fired up the VM.
We have a winner! That was all that was needed and the ix VF driver loaded correctly, and the performance is awesome.
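For reference, here's the fix and how to check it took, run from the pfSense (FreeBSD) shell. The tunable is the one from the source; the device name ixv0 is an assumption, since the VF instance number depends on your VM config.

```shell
# Add the tunable (or edit /boot/loader.conf by hand), then reboot.
# FreeBSD blacklists MSI/MSI-X on certain PCI bridges by default; this
# tells it to ignore that blacklist so the VF can allocate its vectors.
echo 'hw.pci.honor_msi_blacklist=0' >> /boot/loader.conf

# After rebooting with the VF NICs attached, verify:
sysctl hw.pci.honor_msi_blacklist   # should now report 0
pciconf -lv | grep -B1 -A3 ixv      # VF should be claimed by the ixv driver
dmesg | grep -i msi                 # look for MSI-X vectors being allocated
```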
I thought this may be interesting to some of you.