Intel i350 NIC and SR-IOV in ESXi5.5

J-san

Member
Nov 27, 2014
Vancouver, BC
I was trying to enable SR-IOV support on the Intel i350 dual-port NIC built into my X10DRi motherboard, but wasn't having much luck.

The onboard i350 NIC does support SR-IOV at the hardware level; I confirmed this with Supermicro.

Just a warning that, as of this writing, it's not supported in ESXi 5.5, up to and including the patch released on Dec 1, 2014.

The older method to use VMDq doesn't work on ESXi 5.5:
Code:
# esxcfg-module -s "VMDQ=0,8" igb
This was in the vmkernel.log:
Code:
PCI: driver igb claimed device 0000:04:00.0
<6>igb: : igb_validate_option: VMDQ - VMDq multiqueue queue count set to 8
<6>igb: : igb_check_options: VMDq not supported on ESX-5.5
<6>igb 0000:04:00.1: Intel(R) Gigabit Ethernet Network Connection
<6>igb 0000:04:00.1: eth0: (PCIe:5.0GT/s:Width x4)
<6>igb 0000:04:00.1: eth0: MAC:  00:25:90:fc:a5:f5
<6>igb 0000:04:00.1: eth0: PBA No: 070B00-000
<6>igb 0000:04:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
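If you want to confirm for yourself which options the ESXi igb driver actually exposes (and that the SR-IOV/VMDq knobs are absent), you can list its parameters from the ESXi shell. A quick check, assuming the stock inbox igb driver:

```shell
# List every parameter the igb module accepts on this host.
# On the stock ESXi 5.5 inbox driver, max_vfs does not appear.
esxcli system module parameters list -m igb

# If you already applied the unsupported VMDQ option, clear the
# module options and reboot so the driver reloads clean.
esxcfg-module -s "" igb
```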
It's not currently supported in ESXi 5.5U2:
VMware KB: SR-IOV support status FAQ

Also, the SR-IOV-capable cards listed here don't include the i350:
VMware Compatibility Guide: I/O Device Search

Thought I would give this as a heads-up, to save other people from trying to get it working when it's not currently supported.

Although you could check the SR-IOV search above, just in case support is added later on.
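For comparison, on a NIC that is on the supported list (for example an 82599/X520-class adapter using the ixgbe driver), enabling SR-IOV in ESXi 5.5 is just a module parameter plus a reboot. A sketch, assuming an ixgbe-based card; the i350/igb path above does not accept this:

```shell
# Create 8 virtual functions per port on an ixgbe-based NIC
# (comma-separated list = one value per physical port).
esxcfg-module -s "max_vfs=8,8" ixgbe

# After a reboot, verify the SR-IOV-capable NICs are visible:
esxcli network sriovnic list
```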
 

J-san

Member
Nov 27, 2014
Vancouver, BC
I've got 2x E5-2620 v3 @ 2.4GHz processors.

I had only one before, but while benchmarking I felt the CPU usage was too high, so I grabbed an X10DRi motherboard and an extra CPU.

Also, for some reason the BIOS on the new motherboard is only recognizing my 2133MHz DDR4 RAM as 1866MHz... even when selecting the frequency manually.

EDIT: Looks like I didn't read the max RAM speed specs of the processor:
the E5-2620 v3 only supports memory up to DDR4-1600/1866.

ARK | Intel Xeon Processor E5-2620 v3 (15M Cache, 2.40 GHz)
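If you want to verify what speed the DIMMs are actually clocked at versus their rated speed without rebooting into the BIOS, the ESXi shell includes an SMBIOS decoder. A rough check, assuming smbiosDump is present on your build:

```shell
# Dump the SMBIOS memory device records; compare the rated "Speed"
# field against "Current Speed" (what the BIOS actually set).
smbiosDump | grep -A 20 "Memory Device"
```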
 
Last edited:

jaymemaurice

New Member
Sep 10, 2014
I too encountered this. The igb driver in Linux supports SR-IOV, and the driver itself is ported to ESXi... but the SR-IOV bits are not ported :(
The Linux module's parameter is max_vfs, which is missing from the ESXi igb driver's parameter list. There are allusions to it in the vmkload_mod -s output...

Code:
[root@esx1:~] vmkload_mod -s igb
vmkload_mod module information
input file: /usr/lib/vmware/vmkmod/igb
Version: Version 5.0.5.1, Build: 2494585, Interface: 9.2 Built on: Feb 5 2015
Build Type: release
License: GPL
Required name-spaces:
com.vmware.driverAPI#9.2.3.0
com.vmware.vmkapi#v2_3_0_0
Parameters:
skb_mpool_max: int
Maximum attainable private socket buffer memory pool size for the driver.
...
MDD: array of int
Malicious Driver Detection (0/1), default 1 = enabled. Only available when max_vfs is greater than 0

My issue is that I wanted to decrease network latency for my virtualized GDM multi-seat server in my lab... SR-IOV would have worked... but so did setting InterruptThrottleRate=1 per kb.vmware.com/kb/2018891.
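If all you need is lower latency on the igb ports, the interrupt-moderation route from that KB follows the same esxcfg-module pattern used earlier in the thread. A sketch; values are per-port, and a reboot (or driver reload) is needed for them to take effect:

```shell
# Set the driver's low-latency interrupt moderation mode on both
# igb ports (one comma-separated value per port), per KB 2018891.
esxcfg-module -s "InterruptThrottleRate=1,1" igb
```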

YMMV, but maybe you don't actually want to enable SR-IOV... maybe, like me, you just didn't want the networking to suck. SR-IOV has some pretty sad limitations in VMware, like the inability to vMotion and requiring you to reserve all guest memory (which you should be doing anyway if your app is latency-sensitive). With gigabit NICs, you probably don't need SR-IOV.
 
Last edited: