ZFS, 9211-8i, HP SAS Expander and 10x 5k3000 2TB drives...


PigLover

Moderator
Jan 26, 2011
Short version:

Is there any good reason why the 9211-8i, HP SAS expander and Hitachi 5k3000 disks seem to be a perfectly happy combo, even at high IO stress levels, but are a fail for OI/SE11? Am I missing some magic config or an updated driver that I just can't locate?

Long version:

I know the ZFS insiders discourage SAS expanders with SATA drives, but is there any reason this config won't work?

Supermicro X8SIA-F
32GB Kingston ECC RDIMMs (4x 8GB quad-rank DIMMs)
Intel Xeon X3460
LSI 9211-8i with IT firmware (8.0.0.0-IT)
HP SAS expander firmware rev 2.06
10x Hitachi 5k3000 2TB drives
2x Hitachi 7200 RPM Travelstar 2.5" for system drives
Dual-port 10GbE NIC
Quad-port 1000Base-T NIC
ESXi 4.1 + OpenIndiana or Solaris 11 Express

Everything seems to be working with the hardware. MB/CPU passed a 24-hour run of Memtest86+. All 10 drives were exercised with a long-mode SMART self-test. The 9211-8i was flashed and tested stand-alone, and it finds all 10 drives at boot time (actually all 11, since I have a spare in the system).
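For anyone repeating this burn-in, this is roughly what I ran - device names and firmware file names here are placeholders, not my exact setup:

    smartctl -t long /dev/sdX      # start the long (extended) SMART self-test on each drive
    smartctl -a /dev/sdX           # check later for "Extended offline ... Completed without error"

    sas2flsh -o -f 2118it.bin -b mptsas2.rom   # flash the 9211-8i to IT firmware from a boot stick
    sas2flsh -listall                          # confirm 8.0.0.0-IT and that the card enumerates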

My plan was to load ESXi and do a ZFS all-in-one following Gea's instructions. I learned along the way that you can't do PCI passthrough on this motherboard using the two PCIe x8 slots, because they sit behind a PCIe switch. This isn't a problem, since I can put the SAS card in the "x8 in x16" slot.

ESXi installed no problem. Nice and easy, like always, except that it really, really, really wants to use that 10GbE NIC for its management network - unfortunately, it's not hooked to anything quite yet. Easy enough to fix - but getting into the console requires pressing F2, and Supermicro's IPMI console keeps wanting to eat F2s... and there is nothing in the macro menu to send an F2. Off to grab a USB keyboard and take a walk to the garage...
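(If you can get to a shell instead of fighting the console, something like this should move management onto the quad-port NIC - the vmnic numbers are guesses, so check the NIC list first:)

    esxcfg-nics -l                      # list physical NICs and find which vmnicN is a 1GbE port
    esxcfg-vswitch -U vmnic0 vSwitch0   # unlink the 10GbE uplink from the management vSwitch
    esxcfg-vswitch -L vmnic2 vSwitch0   # link one of the 1000Base-T ports instead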

But... when installing OpenIndiana, the installer stalls forever at "examining disks". Try #2 was the Solaris 11 Express LiveCD install. Same problem. Try #3 was a "text only" install of Solaris 11 Express. This time it got to the same place, but the text-based UI let a large number of timeout errors show through. Hmmm. Back to checking all my hardware, cables, everything... No trouble found, but the OS still would not install. After several hours of trying I gave up.

In order to find where the fault is, I decided to load up a Linux guest, pass the 9211-8i through to it and start exercising things. The first .iso I found was Fedora 14 x64. Seemed good enough. Loaded it up as a guest. Easy. Passed through the SAS card - it found all the drives right away. Ran a few bare-drive benchmarks on individual drives. No errors. Hmmm. Built up a 10-drive mdraid RAID-6 array. Worked perfectly. Took 18 hours to initialize, but it did work. Ran some benchmarks on that array and it looked really good (>1 GByte/s reads, though writes seemed limited to just better than single-drive speed). Ran a few stress tests, no problems at all... no matter what I do, I can't seem to get this config to fault.
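For reference, the array build and the quick benchmarks were nothing fancy - roughly the following, with placeholder device names (and note the write test is destructive):

    mdadm --create /dev/md0 --level=6 --raid-devices=10 /dev/sd[b-k]
    cat /proc/mdstat                                   # watch the (very long) initial resync
    dd if=/dev/md0 of=/dev/null bs=1M count=32768      # rough sequential read test
    dd if=/dev/zero of=/dev/md0 bs=1M count=8192 oflag=direct   # rough sequential write test - destroys data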

So - here's the question again: is there any good reason why the 9211-8i, HP SAS expander and Hitachi 5k3000 disks seem to be a perfectly happy combo under Linux, even at high IO stress levels, but are a fail for OI/SE11? Am I missing some magic config or an updated driver that I just can't locate?
 

odditory

Moderator
Dec 23, 2010
I've had no problems with SE11, the 9211-8i, the HP expander and Hitachi drives. SE11 supports the 9211-8i natively with the mpt2sas driver. My only guess is you're doing something wrong with ESXi passthrough. Granted, I haven't played around much with virtualizing SE11, but it did work in the brief testing I did. I was also using the SE11 text-based installer, FWIW, for no particular reason other than it was faster than installing the full SE11 GUI.
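(On the Solaris side the driver that should bind is mpt_sas, if I remember right - mpt2sas is the Linux name. Easy enough to check once the LiveCD is up, something along these lines:)

    prtconf -D | grep -i mpt    # shows which driver is bound to the HBA's device node
    modinfo | grep -i mpt       # confirms the module is actually loaded
    cfgadm -al                  # should list the drives hanging off the HBA/expander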

I take "ZFS insiders discouraging SAS expanders" with a grain of salt. I've heard one guy with lots of theories and no apparent testing to back it up. Never accept any one person's word as gospel, remember "do your own testing".
 

PigLover

Moderator
Jan 26, 2011
Well, I don't know exactly why the 9211-8i didn't work in this configuration. Based on posts over at [H], it appears there may be a disk compatibility issue.

Tonight I pulled the 9211 and replaced it with an IBM M1015 flashed with the 9240-8i firmware. Fired it back up and everything works perfectly. Built two raidz2 pools with napp-it - one using 10 Hitachi 5k3000 2TB drives (brand new) and one using 10 Seagate 7200.11 1.5TB drives (yes - those drives - old ones left over from another project). Everything is perfect - Bonnie++ reports >500MB/s sequential block writes and over 900MB/s sequential block reads for each pool.
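For anyone curious, napp-it is basically doing the equivalent of this under the hood (disk names are examples - use whatever 'format' shows on your box):

    zpool create tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
                             c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0
    zpool status tank
    bonnie++ -d /tank -s 64g -u root    # file size well past RAM so the ARC can't hide everything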

Created a couple of shares and started copying files... seeing sustained Samba writes >100MB/s and 98% LAN utilization on a 1GbE LAN. So far it's a thing of beauty. We'll see what I can get from it in a couple of weeks when my 10GbE switch gets here!
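The shares themselves are just ZFS properties plus the kernel CIFS service - roughly this, with placeholder names:

    zfs create tank/media
    zfs set sharesmb=name=media tank/media    # or simply sharesmb=on
    svcadm enable -r smb/server               # make sure the CIFS service is running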
 

jbraband

Member
Feb 23, 2011
PigLover, I see in this thread that you were pleased with the performance you were seeing on the ZFS pools. Between your last post in this thread and your thread on 10GigE in the networking forum, it seems this is now a non-virtualized SAN/NAS. When you swapped the 9211-8i for the M1015, what issues did you uncover when virtualizing OI/SE11 in either ESXi or Hyper-V that led you to settle on bare-metal SE11?
 

PigLover

Moderator
Jan 26, 2011
Re: 9211-8i vs IBM M1015: I had trouble with the combination of 9211-8i w/IT firmware + HP SAS expander + SE11 + ESXi passthrough. I could get things working reliably using any combination of 3 of these 4 items (e.g., the 9211-8i without the expander under SE11 + ESXi passthrough worked great, but obviously only supported 8 drives, etc.). I did not bother to troubleshoot it all the way to a conclusion because I had the M1015 (9240-8i) available and it worked great.

Re: The decision to "de-virtualize" the SAN:
Before finishing the all-in-one configuration, I wanted to be sure that all of the VMs would restart reliably from a cold start. Not that I expect many power failures, but I wanted it stable on auto-restart if/when they happen.

When running under ESXi, starting the SE11 VM became unreliable when it was assigned more than 2 virtual CPU cores. I do not totally understand this. It might have been related to the passthrough of the M1015 (I never tried it without the passthrough, because that was not an interesting configuration). With 3 cores assigned, SE11 would begin starting, crash, try again, maybe crash again, and then eventually start up. With 4 virtual cores it would just hang forever on about 1 in 3 restarts. Once it got started it was rock-solid stable, even under stress tests - whatever was happening only affected startup. Unfortunately, with ZFS and the 10Gb interconnect I could not get acceptable performance unless I allocated at least 3 cores to the SAN, and I wanted it stable on 4 (just because...).

I built things per Gea's recommendation, using NFS storage on the SAN for the other VMs. That means every other VM on the machine has to wait at startup until the SAN stabilizes, and the unpredictable restart timing for the SAN made it impossible to set reliable "startup delay" values for the other VMs. I just couldn't get it all stable, so I gave up and "de-virtualized" the SAN.
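For context, the dependency I couldn't time reliably looks like this on the ESXi side (the IP, share and VM ID are placeholders, and the autostart arguments are from memory - check the vim-cmd usage text):

    esxcfg-nas -a -o 192.168.1.50 -s /tank/vmstore vmstore   # mount the SAN's NFS export as a datastore
    vim-cmd vmsvc/getallvms                                  # find each guest VM's numeric ID
    vim-cmd hostsvc/autostartmanager/update_autostartentry \
        12 PowerOn 180 2 systemDefault systemDefault systemDefault   # start order 2, 180s delay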

In the stand-alone configuration the SAN restarts reliably in less than 2 minutes, every time. It's easy to get the machines that depend on it to work right. Doing the all-in-one was never really required for me - I have the equipment available to make a stand-alone SAN reasonable. It was always more of a learning exercise. And even though I did not end up with a "Gea's all-in-one virtual SAN+server" configuration, I still feel it was successful because I learned a lot.
 

jbraband

Member
Feb 23, 2011
Very good information - thank you very much for that. I have been mulling over this exact system architecture for the last couple of weeks, aiming for disk redundancy with WHS2011. The only major difference is the SAN OS: I was planning on going with OpenIndiana, but I'm flexible enough to switch that up after I get hardware and can start testing. I am using this in a home environment like yourself, and I think I can live with having the SAN VM autostart at boot, all other VMs held back, and then manually turning them on once I know the SAN is up (IPMI will help wonders here). With that said, hearing about instability during boot of the SAN VM is not encouraging.

I too am looking forward to this from the education standpoint. Sadly, I can't wait to have to troubleshoot this kind of stuff.

I have a couple of backup plans that reuse all the same hardware should this not work out, which minimizes the risk of investing money in new hardware specifically for the "all-in-one dream".

Thanks again, PigLover.
 

PigLover

Moderator
Jan 26, 2011
Yeah, if it were just me I'd be OK with manual restarts too...

But I travel frequently on business and getting calls like "dad, why can't I watch xyz" and then having to walk my wife or one of the teenagers through the restart just ain't gonna work for me.
 

unclerunkle

Active Member
Mar 2, 2011
PigLover said:
When running under ESXi, starting the SE11 VM became unreliable when it was assigned more than 2 virtual CPU cores. I do not totally understand this. It might have been related to the passthrough of the M1015 (I never tried it without the passthrough, because that was not an interesting configuration). With 3 cores assigned, SE11 would begin starting, crash, try again, maybe crash again, and then eventually start up. With 4 virtual cores it would just hang forever on about 1 in 3 restarts. Once it got started it was rock-solid stable, even under stress tests - whatever was happening only affected startup.
This is quite interesting - at least I don't feel alone. This same issue has been bothering me for two weeks now, and I was unable to narrow it down until I tried your workaround of only 2 vCPUs. I can tell you that I tried changing just about every other setting, though.

In any case, I still believe that there must be a fix for this. We could try setting the scheduling affinity in the Resources tab of the VM. More testing on that soon.
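(What I mean is the .vmx equivalent of the Resources > Advanced CPU setting - something like this, pinning the guest's vCPUs to specific host cores:)

    numvcpus = "4"
    sched.cpu.affinity = "0,1,2,3"    # pin the guest's vCPUs to host cores 0-3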

EDIT: No luck...
 

unclerunkle

Active Member
Mar 2, 2011
For anyone interested in the 4 vCPU issue, Gea (developer of Napp-it) had this to say:

Gea said:
About ESXi and 4 vCPUs for OI:

I have also had boot problems sometimes (not always) with 4 cores assigned to OI. There was also advice on the German VMware forum not to assign 4 cores to ESXi guests at all.

So I also suggest assigning 2 vCPUs to each guest. RAM is more critical; CPU is mostly a problem with encryption, and a smaller problem with compression, raid-z and higher levels of checksums.
(For ESXi storage, Raid-10 is suggested - the fastest option with the lowest CPU demands.)
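For reference, "Raid-10" in ZFS terms just means a pool of striped mirrors, something like this (disk names are examples):

    zpool create vmstore mirror c3t0d0 c3t1d0 \
                         mirror c3t2d0 c3t3d0 \
                         mirror c3t4d0 c3t5d0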
 

PigLover

Moderator
Jan 26, 2011
I agree. Most of the time you really don't need 4 vCPUs just for the NAS. But I was setting things up to experiment with 10GbE I/O from that same NAS, and I really did see a performance difference with only 2 vCPUs.