ESXi: Is PCI-Passthrough really designed for storage?


Loto_Bak

New Member
Mar 10, 2011
29
15
3
Has anyone had experience passing a large amount of storage through ESXi?

I am contemplating the following scenario for my upcoming build...
Supermicro X9SCM
4x IBM M1015 SAS controllers (SAS2008)
24+ HDDs in ZFS under either OpenSolaris or FreeBSD

Is passing the storage through to a Solaris/FreeBSD VM and NFS-sharing it back to the ESXi server for the other VMs a reasonable setup?

I have a few concerns that I'm hoping someone has real-world experience with.
- I've heard there are issues with the number of interrupts I'll see by routing all VM disk traffic through the ESXi virtual switch, with the end result severely slowing down my machine.
- Will I be able to take advantage of AES hardware acceleration through the ESXi VM layer?
- Will my HDD I/O suffer greatly under PCI passthrough?
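For concreteness, the share-back plumbing I have in mind boils down to something like the sketch below (pool name, host names and credentials are placeholders, the pool itself would already be built inside the storage VM from the disks the M1015s expose, and the esxcfg-nas step assumes ESXi 4.x; I haven't tested any of this yet):

import paramiko

def run(host, user, password, cmd):
    """SSH to a box, run one command, return its stdout."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    _, stdout, _ = client.exec_command(cmd)
    out = stdout.read().decode()
    client.close()
    return out

# On the storage VM (the Solaris/FreeBSD guest that owns the passed-through
# M1015s): carve out a filesystem on the pool and export it over NFS.
run("storage-vm", "root", "secret",
    "zfs create tank/vmstore && zfs set sharenfs=on tank/vmstore")

# On the ESXi host: mount that export as an NFS datastore so the other VMs
# can live on it (esxcfg-nas is the 4.x tool; later builds use
# `esxcli storage nfs add` instead).
run("esxi-host", "root", "secret",
    "esxcfg-nas -a -o storage-vm -s /tank/vmstore nfs-vmstore")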
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Your HDD performance with passthrough will be almost identical to running the OS on bare metal.

Unless you are running with several bonded gig-e NICs, the virtual switch will handle it just fine. The onboard Intel NICs on that board offload most of the heavy lifting, so no worries.

The biggest problem you will have is that ESXi only allows two passthrough devices per VM... you can't run 4 HBAs unless you split them across two separate VMs, which does not appear to be what you have in mind.

Also, two of the PCIe slots on that board only run at x4 speed. The M1015s in those slots will underperform.
 

xnoodle

Active Member
Jan 4, 2011
258
48
28
PCIe 2.0 x4 is 2 GB/s, so I'd imagine the performance difference is negligible unless those were 24 SSDs rather than HDDs.
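Rough math, assuming ~150 MB/s sustained per 7200RPM drive (an optimistic sequential figure, and an assumption on my part):

drives_per_controller = 24 / 4          # the OP's plan: 24 drives across 4x M1015
per_drive_mb_s = 150                    # assumed sustained rate per drive
load = drives_per_controller * per_drive_mb_s   # ~900 MB/s hitting each controller

pcie2_lane_mb_s = 500                   # usable PCIe 2.0 bandwidth per lane
x4_link = 4 * pcie2_lane_mb_s           # 2000 MB/s
x8_link = 8 * pcie2_lane_mb_s           # 4000 MB/s

print(f"~{load:.0f} MB/s per controller vs {x4_link} MB/s (x4) / {x8_link} MB/s (x8)")

Even an x4 slot has roughly twice the headroom each controller needs with spinning disks.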

You can delay the bootup of the other VMs so they start after the NAS VM is up.
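If you'd rather script that ordering than click through the vSphere client, a rough pyVmomi sketch looks like this (host, credentials and VM names are placeholders; the field names follow the vSphere HostAutoStartManager API, so verify them against your ESXi build):

# Rough sketch of "NAS first, everything else later"; names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi-host", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem, vim.VirtualMachine], True)
host = next(o for o in view.view if isinstance(o, vim.HostSystem))
vms = {o.name: o for o in view.view if isinstance(o, vim.VirtualMachine)}

def entry(vm, order, delay):
    # One autostart rule: power the VM on at position `order`, then wait
    # `delay` seconds before the next VM in the list is started.
    return vim.host.AutoStartManager.AutoPowerInfo(
        key=vm, startOrder=order, startDelay=delay, startAction="powerOn",
        waitForHeartbeat="systemDefault", stopAction="systemDefault",
        stopDelay=-1)

spec = vim.host.AutoStartManager.Config(powerInfo=[
    entry(vms["nas-vm"], 1, 180),    # storage VM first; give NFS time to come up
    entry(vms["other-vm"], 2, 30),   # then the VMs that live on its datastore
])
host.configManager.autoStartManager.ReconfigureAutostart(spec=spec)
Disconnect(si)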

gea has a thread or two here and over at [H] with links to a mini-howto.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
ESXi won't "pass through" more than 2 PCIe devices to a single VM, so the OPs original configuration won't work at all.
You argue that 24 spinning disks (at least 24 7200RPM consumer drives) can't overdrive a single PCIe x8 connection. Fair point.
I'd further argue that these same 24 spinning drives can't overdrive the 8x600MB/s SAS channel you'd get by using a SAS-2 expander like the HP.

Taken together, I'd say this makes a pretty fair argument that the best config to meet the OP's requirements would be a single M1015 HBA and a 36-port SAS2 expander like the HP.
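Putting numbers on that (the per-drive rate is an assumption on my part):

drives = 24
per_drive_mb_s = 150                 # assumed sustained rate for a 7200RPM drive
drive_aggregate = drives * per_drive_mb_s       # ~3600 MB/s from the whole pool

sas2_lane_mb_s = 600
expander_uplink = 8 * sas2_lane_mb_s            # 4800 MB/s dual-linked to the expander
pcie2_x8 = 8 * 500                              # ~4000 MB/s usable from the x8 slot

print(drive_aggregate, pcie2_x8, expander_uplink)   # 3600 < 4000 < 4800

Neither the x8 slot nor the expander uplink is the choke point; the disks are.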
 
Last edited:

nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
ESXi won't "pass through" more than 2 PCIe devices to a single VM, so the OPs original configuration won't work at all.
You argue that 24 spinning disks (at least 24 7200RPM consumer drives) can't overdrive a single PCIe x8 connection. Fair point.
I'd further argue that these same 24 spinning drives can't overdrive the 8x600MB/s SAS channel you'd get by using a SAS-2 expander like the HP.

Taken together, I'd say this makes a pretty fair argument that the best config to meet the OP's requirements would be a single M1015 HBA and a 36-port SAS2 expander like the HP.
Couldn't have said it better.
 

gea

Well-Known Member
Dec 31, 2010
3,156
1,195
113
DE
ESXi won't "pass through" more than 2 PCIe devices to a single VM, so the OPs original configuration won't work at all.
You argue that 24 spinning disks (at least 24 7200RPM consumer drives) can't overdrive a single PCIe x8 connection. Fair point.
I'd further argue that these same 24 spinning drives can't overdrive the 8x600MB/s SAS channel you'd get by using a SAS-2 expander like the HP.

Taken together, I'd say this makes a pretty fair argument that the best config to meet the OP's requirements would be a single M1015 HBA and a 36-port SAS2 expander like the HP.
In three of my All-In-Ones I use 3 SAS controllers with my 24 x 2.5" bay SuperMicro cases.
If I remember correctly, you can pass through up to 6 adapters with the current ESXi 4.1.

I would avoid expanders whenever possible.
Use 3 SAS adapters instead - fast, easy and no problems.
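If you want to check up front which devices a box will actually offer for passthrough, a rough pyVmomi sketch like this works (host and credentials are placeholders; the property names follow the vSphere PCI passthrough API, so verify against your own setup):

# Rough sketch only; "esxi-host"/credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi-host", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

# Map PCI id -> human-readable name, then list which devices the host reports
# as passthrough-capable and whether passthrough is currently enabled.
names = {d.id: "%s %s" % (d.vendorName, d.deviceName) for d in host.hardware.pciDevice}
for info in host.config.pciPassthruInfo:
    if info.passthruCapable:
        print(info.id, names.get(info.id, "?"), "enabled:", info.passthruEnabled)

Disconnect(si)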

Currently I would not like to be the first to chase down Sandy Bridge problems.
If I needed a server right now, I would use a board from the Supermicro X8...-F series
with a 3420 or 5520 chipset.

And yes, PCI passthrough is the only way to virtualize a storage server
without restrictions (assuming enough CPU power and RAM).

Gea
 
Last edited:

unclerunkle

Active Member
Mar 2, 2011
150
38
28
Wisconsin
I believe this would be the best place to post this so here goes...

My hardware is as follows:
SuperMicro X9SCM-F
Xeon E3-1230
16GB ECC DDR3 UDIMM
2x IBM M1015
VMware ESXi 4.1 U1

During the installation of VMware ESXi onto a USB stick with the two IBM M1015 cards installed, the installer would hang at "Loading module megaraid_sas". After removing the cards, the installation completed without issue, and the hypervisor boots fine on its own. However, whenever I put one or both of the M1015s back in, the hypervisor freezes on startup.



VMware lists the M1015 (LSI 9240-8i) as compatible with ESXi 4.1 U1 here. I have updated the firmware on both IBM cards to the latest version on LSI's site for the 9240-8i (4-MAR-11). A quick Google search on the error does turn up a fix from VMware; however, that fix is a BIOS update for Dell servers.

Does SuperMicro need to release an updated BIOS for the X9SCM-F, or am I missing something here? Odditory, you have been testing these and have a similar config; can you help?
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,802
113
I'm working on getting a SM board in-house to help with this. Will keep you posted.
 

unclerunkle

Active Member
Mar 2, 2011
150
38
28
Wisconsin
Ah ha, looks like I found a solution! :sigh:

In the BIOS, under Advanced > PCIe/PCI/PnP Configuration, there is an option called "PCI ROM Priority". In order for the system to boot with the IBM ServeRAID M1015 cards installed, this value needs to be set to "EFI Compatible ROM" and NOT "Legacy ROM". I guess it makes sense, but it's not something that is documented. Hopefully my post will save others some hassle in the future!



Patrick, I'm looking forward to your comments on this motherboard in the future :D
 

nilsga

New Member
Mar 8, 2011
34
0
0
Does it work with VMDirectPath also? Can you pass through the motherboard SATA controller?
 

unclerunkle

Active Member
Mar 2, 2011
150
38
28
Wisconsin
Yes, I was able to pass through the SAS controller in VMware. I haven't tested it in a VM yet, but I don't expect any issues.

EDIT: Hold that thought. You are talking about the motherboard controller. I did see that as an option, but I didn't try it. I wouldn't expect issues there either, though.
 

nilsga

New Member
Mar 8, 2011
34
0
0
Yes, I was thinking about both the SAS controller and the onboard SATA controller. I've read that even though they show up as available for passthrough, it doesn't always work.
 

Watch22

New Member
Apr 21, 2011
1
0
0
I started reading this thread and decided to join the forum. I'm currently running an X9SCM-F with the storage controllers passed through under ESXi 4.1 Update 1. This configuration runs at the same speed as a physical server. I have been running it for around a week and have been very happy; there are around 10 VMs currently on this box. This is my home playground. Here is what I'm running:

X9SCM-F
Xeon E3-1230
16GB RAM
PERC 6/i with 2 x 300GB 15K SAS in RAID 1
M1015 with 2 x 640GB SATA in RAID 1
6 x 2TB SATA running off the internal SATA ports
Intel Dual Port PRO/1000 ET
Intel Dual Port PRO/1000 CT

Here is my setup:
My Untangle server runs off a VMDK, with the PRO/1000 CT adapter passed through directly. I found this to be a great config for the UTM appliance. I can max out my 20MB connection and the box barely uses any resources.

My file server uses all 6 internal SATA ports in direct passthrough. The OS runs on a VMDK and the 6 SATA drives hang off the internal ports. I get full speed when copying to and from the VM.

The PRO/1000 ET card is used for the internal switch. The reason for using the ET adapter is VMDq.
 

Loto_Bak

New Member
Mar 10, 2011
29
15
3
Thanks everyone for your insights.

I'm running my X9SCM with 2x M1015s at the moment.
Thanks unclerunkle for your megaraid_sas fix; I ran into that issue myself.
To add some more detail: with "Legacy ROM" set I could only run the two cards in slots 1 and 3 (counting from the top). Changing the setting lets me run them in any PCIe slot.

That said, it hasn't been terribly smooth sailing.

Has anyone had any luck passing through the 9240s (IBM M1015s) to Solaris Express 11?
I'm using the imr_sas driver (version 3.03) from LSI's site, yet I can't get it to detect the cards.

unclerunkle, you seem to have a near-identical setup to mine; what OS is driving your 9240s?