Multiple HBAs or SAS expander for ZFS build


firenity

Member
Jun 29, 2014
Hello.

I'm currently planning a new 24-disk ZFS storage build, probably based on a Supermicro SC846 series chassis (or similar). I want to use a number (10-12) of SATA SSDs in it, both for caching and for a pure flash pool. The rest will be high-capacity SATA HDDs, probably WD Reds.

In terms of getting the disks connected, I think I have the following options:
  1. Single SAS HBA + SAS expander
  2. Single SAS HBA + backplane SAS expander
  3. Multiple SAS HBAs (e.g. 2 HBAs, each w/ 4 x SFF-8087/SFF-8643)
What are the pros and cons of each?

I'm leaning towards using two HBAs at the moment; are there any disadvantages to that?
I figure there's no added SAS expander "complexity" and I avoid possible bandwidth constraints (remember, SSDs!). Does that make sense?
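For what it's worth, here's the rough back-of-envelope math behind my bandwidth worry, as a small Python sketch (the per-drive throughput figures and the single-uplink expander assumption are just my guesses, not measurements):

Code:
# Rough bandwidth sanity check (all per-drive figures are assumed, not measured)

lane_gbps = 6 * 0.8 / 8            # one SAS2/SATA3 lane: 6 Gb/s, 8b/10b -> ~0.6 GB/s usable
expander_uplink = 4 * lane_gbps    # expander fed by a single SFF-8087 (x4) link
pcie2_x8 = 8 * 0.5 * 0.8           # PCIe 2.0 x8: ~4 GB/s raw, ~3.2 GB/s after overhead

ssd_peak = 12 * 0.50               # 12 SATA SSDs at roughly 500 MB/s each
hdd_peak = 12 * 0.15               # 12 WD Reds at roughly 150 MB/s each

print(f"all 24 drives, best case : ~{ssd_peak + hdd_peak:.1f} GB/s")
print(f"single expander uplink   : ~{expander_uplink:.1f} GB/s  <- likely bottleneck for options 1/2")
print(f"per HBA over PCIe 2.0 x8 : ~{pcie2_x8:.1f} GB/s -> two HBAs ~{2 * pcie2_x8:.1f} GB/s")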

Thanks!
 

Mike

Member
May 29, 2012
EU
If you can spare the PCIe lanes, get the adapters for more bandwidth and uptime.
 

bds1904

Active Member
Aug 30, 2013
With newer (and faster) SSDs, avoid expanders. Expanders tend to limit performance in one way, shape, or form: some limit IOPS more than sequential read/write, some are great on IOPS but not on sequential read/write. It's just another layer of complexity when you are using SSDs.

If you were just using spinners, an expander would be fine.
 

mrkrad

Well-Known Member
Oct 13, 2012
Definitely go with multiple SAS controllers since you are talking SATA. SATA drives plus SAS expanders equals super buggy reliability when things go wrong (with a drive!).
 

firenity

Member
Jun 29, 2014
Thanks for your input, everyone!
I will definitely avoid expanders then.

Mike said:
If you can spare the PCIe lanes, get the adapters for more bandwidth and uptime.
I understand that multiple adapters can be beneficial in terms of uptime when you, for instance, mirror across two HBAs, since either adapter could go down and the array would still be up.
However, if you're doing RAID-Z over two HBAs (instead of just one), you pretty much double your chances of the pool going offline, no?
I guess it all depends on how you set things up. What are your thoughts on this?
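Here's the toy math I have in mind, as a sketch (the per-HBA failure probability is a made-up number; only the structure of the comparison matters):

Code:
# Toy comparison: does the pool survive a single HBA failure?
# p is an assumed per-HBA failure probability over some interval.

p = 0.02

# Mirrored pairs with each side of the mirror on a different HBA:
# the pool only goes offline if BOTH HBAs fail.
mirror_offline = p * p

# One RAID-Z vdev spread over both HBAs: losing either HBA drops more
# disks than the vdev can tolerate, so EITHER failure takes the pool down.
raidz_offline = 1 - (1 - p) ** 2   # roughly 2*p for small p

print(f"mirror split across HBAs : {mirror_offline:.4%} chance of going offline")
print(f"raid-z across two HBAs   : {raidz_offline:.4%} chance of going offline")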

I have started to look for suitable motherboards for the SC846 case. My CPU of choice would be the Xeon L5640, because it seems to strike a nice balance between (used) cost, performance and power consumption/efficiency. Please let me know if you think there are better options out there.
I've been searching for boards with dual LGA 1366 sockets, at least 8 DIMM slots and onboard IPMI. Supermicro seems to be the only manufacturer that fits the bill.

I noticed that some of them have 2 x SFF-8087 SAS ports onboard, like the X8DT3-LN4F. That would mean 8 of the 24 disks could be connected directly to the motherboard and wouldn't require a dedicated adapter - unless there's anything wrong with using the onboard ports?
In that case I would only need one more 4 x SFF-8087 controller, or two 2 x SFF-8087 SAS controllers, to get all the disks connected. The latter is probably the cheaper option, e.g. 2 x M1015 or something like that.

Do I have to look for anything special in a SAS/SATA HBA when it comes to using (SATA) SSDs? Will pretty much any (proven) PCIe 2.0 x8 HBA work or are there differences between them in terms of IOPS or the like?
 

bds1904

Active Member
Aug 30, 2013
SAS2008-based controllers are the go-to for DIY storage boxes, even with SSDs. The IBM M1015, LSI 9201-8i, etc. are the best all-around choice. They'll give you good throughput and IOPS with SSDs, as long as your motherboard can provide the PCIe lanes.

Just an FYI, you can often find out what chipset the mobos use right on the manufacturer's website. The X8DT3-LN4F has an onboard LSI 1068e first-generation SAS controller, which is not ideal for SSDs but would work great with spindle drives.
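Rough numbers behind that, as a sketch (nominal line rates with 8b/10b encoding only; real-world throughput will be lower):

Code:
# Per-port ceilings: SAS1 (1068e) vs SAS2 (SAS2008). Nominal rates, 8b/10b encoding.

def usable_gbs(line_rate_gbit):
    return line_rate_gbit * 0.8 / 8   # Gb/s line rate -> GB/s of payload

sas1_port = usable_gbs(3)             # LSI 1068e: 3 Gb/s per port, ~0.3 GB/s
sas2_port = usable_gbs(6)             # SAS2008:   6 Gb/s per port, ~0.6 GB/s

print(f"1068e, 8 ports: ~{8 * sas1_port:.1f} GB/s total")
print(f"2008,  8 ports: ~{8 * sas2_port:.1f} GB/s total")
# A single decent SATA SSD (~500 MB/s) already saturates a 3 Gb/s port, so the
# onboard 1068e is fine for spinners but a bad home for the flash pool.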
 

firenity

Member
Jun 29, 2014
bds1904 said:
SAS2008-based controllers are the go-to for DIY storage boxes, even with SSDs. The IBM M1015, LSI 9201-8i, etc. are the best all-around choice. They'll give you good throughput and IOPS with SSDs, as long as your motherboard can provide the PCIe lanes.
That's good to hear.

bds1904 said:
Just an FYI, you can often find out what chipset the mobos use right on the manufacturer's website. The X8DT3-LN4F has an onboard LSI 1068e first-generation SAS controller, which is not ideal for SSDs but would work great with spindle drives.
There are also a couple of boards with SAS2008 based controllers: X8DTH-6F, X8DT6-F and X8DTL-6F. Unfortunately they don't seem to be very common on eBay (pricey).

I guess it will come down to using either
  • Mobo SAS (8 ports) + 2 x SAS HBAs (8 ports each)
  • 3 x SAS HBAs (8 ports each)
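Both add up to the 24 bays either way; a quick tally, plus how I'd probably split the drives (12 SSD + 12 HDD is just my planned split):

Code:
# Port tally for the two options (12 SSD + 12 HDD is the planned split).

options = {
    "mobo SAS (8) + 2 x 8-port HBA": 8 + 2 * 8,
    "3 x 8-port HBA":                3 * 8,
}
for name, ports in options.items():
    print(f"{name}: {ports} ports for 24 bays")

# If the onboard ports turn out to be a SAS1 controller, I'd hang 8 of the
# WD Reds off them and keep all 12 SSDs on the SAS2008 cards.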
 

legen

Active Member
Mar 6, 2013
Sweden
firenity said:
There are also a couple of boards with SAS2008 based controllers: X8DTH-6F, X8DT6-F and X8DTL-6F. Unfortunately they don't seem to be very common on eBay (pricey).

I guess it will come down to using either
  • Mobo SAS (8 ports) + 2 x SAS HBAs (8 ports each)
  • 3 x SAS HBAs (8 ports each)
We are using the X8DTH-6F in our SAN, and we are going for multiple SAS2008 controllers when we expand. We used L5520 CPUs and they "limit" our throughput in some benchmarks (in real-world scenarios the network is the limit well before the CPU).
 

firenity

Member
Jun 29, 2014
legen said:
We are using the X8DTH-6F in our SAN, and we are going for multiple SAS2008 controllers when we expand. We used L5520 CPUs and they "limit" our throughput in some benchmarks (in real-world scenarios the network is the limit well before the CPU).
Interesting.
Can you share a little bit more about your setup, please?

Do you use the mobo's SAS ports? Any SSDs in there? What case are you using?
Is this a ZFS box? Can you share those benchmarks? :)

Thanks!
 

Chuntzu

Active Member
Jun 30, 2013
I can confirm that when pushing the limits of IOPS and max sequential speeds, the L5520 can limit speeds (at multiple GB/sec). I also found a white paper from Fusion-io or some other PCIe SSD company that ran tests comparing the L5520 and some of the X56xx processors and came to the same conclusions. I really wish I still had that white paper. But anyway, 4-5 GB/s is pretty easy to hit and multiple hundreds of thousands of IOPS are pretty easy using a couple of LSI 2008 HBAs. Getting above that takes some creativity but can be done, e.g. using 9202-16e controllers makes it pretty simple. But using a faster processor does help (by faster I mean GHz, not just threads, 4 vs. 6 cores).
 

Mike

Member
May 29, 2012
EU
Chuntzu said:
I can confirm that when pushing the limits of IOPS and max sequential speeds, the L5520 can limit speeds (at multiple GB/sec). I also found a white paper from Fusion-io or some other PCIe SSD company that ran tests comparing the L5520 and some of the X56xx processors and came to the same conclusions. I really wish I still had that white paper. But anyway, 4-5 GB/s is pretty easy to hit and multiple hundreds of thousands of IOPS are pretty easy using a couple of LSI 2008 HBAs. Getting above that takes some creativity but can be done, e.g. using 9202-16e controllers makes it pretty simple. But using a faster processor does help (by faster I mean GHz, not just threads, 4 vs. 6 cores).
You might find the multiqueue block layer an interesting read: The multiqueue block layer [LWN.net]
 

legen

Active Member
Mar 6, 2013
Sweden
firenity said:
Interesting.
Can you share a little bit more about your setup, please?

Do you use the mobo's SAS ports? Any SSDs in there? What case are you using?
Is this a ZFS box? Can you share those benchmarks? :)

Thanks!
I had some big threads on our SAN build with benchmarks, setup details, etc. Sadly they were lost in "the great STH crash" :(

I will check if I can find the benchmark results. I used iozone and ran the results through Excel to get nice graphs. Unfortunately I don't think I have any local copy saved.

The SAN uses the following:
SuperMicro X8DTH-6F
CSE-216BA-R920L chassis
2 x Kingston 60GB SSDNow (rpool)
8 x Crucial M500 SSDs (storage)
2 x Brocade BR-1020 dual-port 10GbE CNAs
LSI 9211-8i SAS HBA

We run OmniOS on it, with one big raid-z2 pool on the Crucial drives and a mirrored rpool.
We will probably add a dedicated ZIL device (SLOG) soon too, to get better sync write performance - i.e. mirrored SSDs or a ZeusRAM.
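For reference, adding a mirrored log vdev later is a small operation - a sketch below, with "tank" and the device names as placeholders rather than our actual layout:

Code:
# Sketch: attaching a mirrored SLOG to an existing pool (OmniOS/illumos).
# "tank" and the c#t#d# names are placeholders, not our real layout.
import subprocess

subprocess.run(
    ["zpool", "add", "tank", "log", "mirror", "c3t0d0", "c3t1d0"],
    check=True,
)
# Sync writes then land on the dedicated log devices instead of the in-pool
# ZIL, which is where the sync-write improvement comes from.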
 

legen

Active Member
Mar 6, 2013
Sweden
Funny thing, I found the benchmark data - I really thought it was gone!
I have several Excel files showing the performance of our setup with different numbers of SSD drives. This is with iozone, using 8 threads. Memory was only 16GB during the tests.
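For reference, an 8-thread iozone run along those lines looks roughly like the sketch below (file sizes and paths are assumptions, not the exact command we used):

Code:
# Sketch of an 8-thread iozone throughput run; sizes/paths are assumptions,
# not the exact command used for the graphs below.
import subprocess

threads = 8
cmd = [
    "iozone",
    "-t", str(threads),   # throughput mode with 8 worker threads
    "-s", "4g",           # 4 GB per thread, well past the 16 GB of RAM
    "-r", "128k",         # 128 KB records, matching the default ZFS recordsize
    "-i", "0",            # sequential write/rewrite
    "-i", "1",            # sequential read/reread
    "-i", "2",            # random read/write
    "-F",                 # one test file per thread follows
] + [f"/tank/bench/iozone.{n}" for n in range(threads)]

subprocess.run(cmd, check=True)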

For example, 8x M500 in raid-z2:

 

firenity

Member
Jun 29, 2014
Thanks for digging that up, legen!

I assume you were using 2 x L5520?
Did you also use the mobo's SAS ports for this?
 

legen

Active Member
Mar 6, 2013
Sweden
firenity said:
Thanks for digging that up, legen!

I assume you were using 2 x L5520?
Did you also use the mobo's SAS ports for this?
Yes, two L5520s. Using more than 5 SSDs (if I remember correctly) in RAID-0 will cause the CPU to limit the performance.

We use the motherboard ports for the two Kingston SSDs (rpool) and the LSI 9211-8i for the other SSDs (the raidz2 pool).