X9DRD-7LNF4 onboard LSI 2308 SR-IOV not capable?

chinesestunna

Active Member
Jan 23, 2015
Hi folks,

Still tinkering with my new build. With my X9DRD-7LNF4 board and its onboard LSI 2308 controller, I can't seem to configure SR-IOV in ESXi 6.7. I've done the install and enabled all the relevant options I can find in the BIOS, but VMware reports the device as not SR-IOV capable.
The only thing I can think of is the IT-mode firmware I flashed for full device passthrough to the NAS VM. Ideally I'd like to have the ports split between 2 VMs running drives off expanders.

Any ideas?
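
For anyone wanting to poke at the same thing, this is roughly how I've been checking what the host sees for the controller from the ESXi shell (the "lsi" grep pattern is just whatever matches your device's name string):

    esxcli hardware pci list | grep -i -A 25 lsi

That dumps the PCI record for the 2308, including which driver module claimed it, though it doesn't report anything about SR-IOV capability directly.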
 

rune-san

Member
Feb 7, 2014
It should support SR-IOV in IT mode without any issues. Your second comment about splitting ports is a solid "no": the controller is what you're passing through, and all the SAS ports hang off that one device, so you're not going to be able to split the ports between different VMs. Did you try vSphere 6.5 or similar to rule out any 6.7 foolery?
 

chinesestunna

Active Member
Jan 23, 2015
Hmmm, I'll check out 6.5 if possible. There seems to be very little documentation/info on people using SR-IOV for storage, so maybe I was mistaken about what it does? My understanding was that you're able to "split" a PCIe device's resources amongst several VMs. The onboard quad-port Intel i350 LAN is reported as SR-IOV capable in ESXi 6.7 and presents as 4 distinct devices for passthrough.
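
To show what I mean, here's roughly how I've been looking at it from the ESXi shell (the grep pattern is just a guess at the device name string on your host):

    # the i350 shows up as four separate PCI functions, one per port
    lspci | grep -i 350

Each of those functions can be toggled for passthrough independently, which is what I was hoping to get out of the two SAS connectors; the 2308, though, presents as a single PCI function.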
 

rune-san

Member
Feb 7, 2014
My apologies, I had this confused with you attempting IOMMU passthrough of the device to a VM. You are *only* doing SR-IOV, where you want ESXi to control the device and break it up into virtual devices for other VMs to use.

Technically this is possible, but not with ESXi. As far as I know, VMware has only ever qualified network adapters (and InfiniBand adapters in Ethernet mode) for this virtual-function breakup. There is no SAS controller (or much of anything else, really) on their HCL for SR-IOV.
 

chinesestunna

Active Member
Jan 23, 2015
Thanks @zir_blazer
The idea (in my head) was to have each of the 2 SAS connectors split off to a separate VM; I have expanders, so that would allow each VM to have many drives attached. As you and @rune-san mentioned, and per that thread, this seems like more trouble than it's worth, but it was a "cool, let me try this out" idea.
 

vanfawx

Active Member
Jan 4, 2015
Vancouver, Canada
As far as I know, SR-IOV on SAS would have to work like a zoned fabric: you would have to map drives to each virtual function. How that would be accomplished, I have no idea. SAS switches ship with a utility that lets you configure zoning based on the WWPN/WWNN of the HBA.

If you make any progress, please share! :)
 

chinesestunna

Active Member
Jan 23, 2015
@vanfawx that was sort of what I thought as well: SAS basically works on a network/address-based topology, and has been much kinder about multipath and expanders/backplanes than SATA before it. I've stopped messing with this as it's my production home server and I can't have it down all the time :)
 

kapone

Well-Known Member
May 23, 2015
chinesestunna said:
@vanfawx that was sort of what I thought as well: SAS basically works on a network/address-based topology, and has been much kinder about multipath and expanders/backplanes than SATA before it. I've stopped messing with this as it's my production home server and I can't have it down all the time :)
This is one of those things on my list of "gotta try it" that I've never gotten around to. I'm sure you've seen this VMware Knowledge Base article, but on the off chance that you haven't...

I know you said you enabled all the right BIOS options etc... BUT... ESXi disables Virtual Functions by default... so:

  1. At the host console, or via SSH as root, run the command:

    esxcli system module parameters set -m NIC_Driver_Module -p "max_vfs=n"

    Where:
    • NIC_Driver_Module is the module name of the NIC which is SR-IOV capable (for example, ixgbe)
    • n is the number of virtual functions (VFs) provided by the NIC (for example, 8)

    For example, to configure for an Intel X540 10 GB Ethernet Adapter, run the command:

    esxcli system module parameters set -m ixgbe -p "max_vfs=8"

    If you have a dual port NIC or two NICs that use the same module, run the command:

    esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"

    Note: Add a comma and the value 8 for each additional NIC (for example, max_vfs=8,8,8 for three NICs, and so on). The number of virtual functions supported and available for configuration depends on your system configuration.

  2. Reboot the host to reload the driver with the configured parameters.
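
After the reboot there's a quick sanity check you can do (module name here is ixgbe per the example above; substitute whatever your NIC actually uses):

    # confirm the parameter stuck
    esxcli system module parameters list -m ixgbe | grep max_vfs

    # the VFs appear as additional PCI functions
    lspci | grep -i "virtual function"

    # and ESXi lists its SR-IOV enabled NICs here
    esxcli network sriovnic list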
 

chinesestunna

Active Member
Jan 23, 2015
kapone said:
This is one of those things on my list of "gotta try it" that I've never gotten around to. I'm sure you've seen this VMware Knowledge Base article, but on the off chance that you haven't...

I know you said you enabled all the right BIOS options etc... BUT... ESXi disables Virtual Functions by default... so:
I have not seen that article! Thank you for pointing it out. A couple of things I can update based on my observations so far:
  1. If one searches VMware's compatible hardware list, LSI has literally "0" devices with SR-IOV supported by ESXi (bummer), just as @rune-san mentioned above; searching for all hardware that supports SR-IOV turns up nothing but network and FCoE gear.
  2. I did not manually enable SR-IOV via console commands, but the article seems to reference ESXi 5.1/5.5. Running 6.7, the host picked up the onboard quad-port Intel i350 LAN no problem, SR-IOV capable and all, which leads me to believe the issue is on the SAS-specific or VMware-support side of things.
Thanks for the info again
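
One more data point, for the record: I was curious whether the SAS driver module even exposes a VF knob the way ixgbe does. The module name below is a guess, since it depends on which driver claimed the 2308 (vmklinux mpt2sas on older builds, native lsi_msgpt2 on newer ones):

    # list every parameter the driver accepts - I don't see any max_vfs equivalent
    esxcli system module parameters list -m mpt2sas

So even if the controller's silicon had the SR-IOV bits, the ESXi driver doesn't appear to have a way to turn them on.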
 

kapone

Well-Known Member
May 23, 2015
Yup, it was for the 5.x versions of ESXi. And my google-fu turned up almost nothing in terms of SAS SR-IOV on ESXi. I suspect there's a reason for it... it's a driver limitation... by design.

With SR-IOV, storage clustering/SDS would be much, much easier... and guess what that would do to VMware vSAN sales... ;)