HBA with the greatest throughput using SSDs?


dba

Moderator
Hello,

I'm looking for the PCIe HBA with the absolute maximum throughput when used in JBOD mode. Any experiences to share?

So far, in my tests using IOMeter to measure 128KB and 1MB read speeds, the LSI 9200-8e is the champion at over 2,400 MB/Second with six SSDs and over 2,500 MB/Second with seven or eight SSDs. The newer LSI 9205-8e is no faster; in fact, for some reason it was a bit slower in some test runs.

How do the 8265/9285 do in JBOD mode? How about the 3Ware controllers with their LSI hardware but different firmware?

To share some of my findings: My testing used OCZ Agility3 and Vertex3 120GB drives. With IOMeter 128KB transfers and a queue depth of 10 per drive, here is what the single-controller throughput looked like:

Disks  MB/Second
1      395
2      799
3      1228
4      1713
5      2147
6      2403
7      2520
8      2568

The interesting part (aside from the fact that SSDs are insanely fast) is that while the eight-port LSI 9200-8e cards can push serious data, they can't keep up with eight drives and certainly can't saturate their 4GB/Second PCIe x8 connection. Has anyone gotten more throughput than this on a single HBA using, say, a SAS expander and tons of traditional drives? Using a different card?
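
For anyone who wants to sanity-check the scaling, here is a minimal sketch in Python, using only the numbers from the table above plus an assumed ~4,000 MB/Second ceiling for a PCIe 2.0 x8 link; it just computes how much each added drive contributes:

```python
# Rough scaling check using the measured IOMeter numbers above (no new measurements).
measured = {1: 395, 2: 799, 3: 1228, 4: 1713, 5: 2147, 6: 2403, 7: 2520, 8: 2568}  # MB/s
PCIE2_X8_CEILING = 4000  # MB/s, approximate usable bandwidth of a PCIe 2.0 x8 link (assumption)

prev = 0
for drives, total in sorted(measured.items()):
    gain = total - prev            # throughput added by this drive
    per_drive = total / drives     # average per-drive throughput
    print(f"{drives} drives: {total:4d} MB/s total, +{gain:3d} MB/s added, "
          f"{per_drive:5.1f} MB/s per drive, {100 * total / PCIE2_X8_CEILING:4.1f}% of PCIe 2.0 x8")
    prev = total
# The per-drive gain collapses after the sixth drive, which points at a controller
# limit around 2,500-2,600 MB/s, well short of the x8 slot's theoretical bandwidth.
```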
 

mobilenvidia

Moderator
You could try an LSI9202: it has dual SAS2008 ROCs with 16 external ports. Get yourself another 8x SSDs.

I would have thought the LSI9205 with the SAS2308 ROC would have done better than the SAS2008 cards; it shows the controller has little influence when it is just moving data from drive to motherboard.

The LSI9260/80 can't do JBOD; the best it can do is single-drive RAID0, so the LSI9265/85 may not support JBOD either.
I'm finding my IBM M5015 (LSI9260) performs slightly slower than my M1015 (LSI9240/9211).

It looks like once the controller gets to around 5x SATA3 SSDs the bus is beginning to saturate, as there is no longer a steady increase.
You could get 2x controllers and run 4x SSDs on each; you should get to around 3,400 MB/Second versus the 2,500 MB/Second you see with 8x drives on the same controller (rough numbers sketched below).

Or just wait for SAS3/SATA4 controllers on a PCIe 3.0 bus; then we should see better throughput.
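
To put rough numbers on that split, here is a small sketch; the ~428 MB/Second per drive comes from dba's 4-drive data point and the ~2,568 MB/Second per-controller cap from his 8-drive result, so treat both as assumptions rather than new measurements:

```python
# Toy model: each controller tops out at an observed cap; below that, throughput
# scales with the per-drive figure from the 4-drive measurement. Both values assumed.
PER_DRIVE = 1713 / 4        # MB/s per SSD, taken from the 4-drive data point
CONTROLLER_CAP = 2568       # MB/s, best observed single-controller throughput

def estimate(controllers: int, drives_per_controller: int) -> float:
    """Estimated aggregate MB/s for a given controller/drive split."""
    per_controller = min(drives_per_controller * PER_DRIVE, CONTROLLER_CAP)
    return controllers * per_controller

print(f"1 controller  x 8 SSDs: ~{estimate(1, 8):.0f} MB/s")
print(f"2 controllers x 4 SSDs: ~{estimate(2, 4):.0f} MB/s")
# The 2x4 split lands around 3,400 MB/s versus roughly 2,600 MB/s for 1x8,
# which is the gap described above.
```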
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Hello,

Right now I have 25 SSDs total running on five RAID controllers for a total throughput of 10,800 MB/Second or so. I will need more capacity and will want more throughput, but I'm out of PCIe slots - thus the hunt for the greatest possible throughput per slot. I have seen the LSI 9202-8e you mention and it looks absolutely perfect... but of course it doesn't seem to be available anywhere. LSI calls it an OEM product as opposed to an LSI SKU, but their sales organization says that they aren't aware of anyone OEM'ing it yet.
 

mobilenvidia

Moderator
24 SSDs, wow! What do you run that would need that much throughput?

Looks like LSI has not sold many LSI9202s, as they are rather hard to find.
Tell them to send you one to review; it's not like you can't fill all 16 ports with SSDs :)

So are you running each SSD as JBOD/Configured Good, i.e. each drive individually visible to the OS?
Or are you running RAID0?

Have you tried 8x drives per controller, but as two sets of four drives in RAID0?
That may help maximum throughput.

Or for individual drives (no RAID) you'll be wanting a P6T7 WS SuperComputer,
then 6x LSI9211 cards with 4 or even 5x SSDs each to get the maximum throughput before the per-card limit is reached,
and one spare PCIe slot for a video card as well.
Problem solved :)
 

Patrick

Administrator
Staff member
Just as a note, I started with the P6T7 WS SuperComputer and had issues with the HP SAS Expander and the board. I have actually been giving ASUS feedback on that compatibility and they have been working on it with newer boards.

dba, might I suggest waiting a bit. One thought is that the new Xeon E5 series C600 platform supports PCIe 3.0 and some boards will have onboard PCH based SAS. Might be worth looking at when they come out.

As far as controllers go, LSI is doing a great job. PCIe 3.0 controllers will be a good place to look, since you get more interface bandwidth, which helps both IOPS and throughput. I think you have a configuration that warrants looking at the next generation.
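
For a rough sense of how much extra headroom a PCIe 3.0 slot buys, here is a quick calculation; the line-encoding factors are the standard ones, but the resulting figures are approximations that ignore protocol overhead beyond the encoding:

```python
# Approximate usable bandwidth of an x8 link: transfer rate adjusted for line encoding.
LANES = 8
pcie2_per_lane = 5.0e9 * 8 / 10 / 8     # 5 GT/s with 8b/10b encoding -> bytes per second
pcie3_per_lane = 8.0e9 * 128 / 130 / 8  # 8 GT/s with 128b/130b encoding -> bytes per second

print(f"PCIe 2.0 x8: ~{LANES * pcie2_per_lane / 1e9:.1f} GB/s")  # about 4.0 GB/s
print(f"PCIe 3.0 x8: ~{LANES * pcie3_per_lane / 1e9:.1f} GB/s")  # about 7.9 GB/s
# Roughly double the slot bandwidth per x8 card, which is the extra headroom
# a next-generation controller could use.
```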
 

dba

Moderator
Why do I need that much throughput? Good question. My project is nicknamed "The dirt-cheap data warehouse," and if you are familiar with data warehousing then that is enough of an answer. I am currently working with a data set that contains a bit over 5 billion rows in its largest table. It is pretty normal to run a query that reads all of those rows, and of course the customer wants quick results!

I tested every conceivable combination of RAID levels, hardware versus software RAID, stripe sizes, and so on to find the best solution for my particular use case. The most performant solution (by far) was to use the cards as straight HBAs (the OS can see every drive) and let Oracle ASM do the RAID using its built-in mirroring, which is an implementation of RAID1E. RAID1E stripes all data across all drives (providing the same great read speeds as RAID0) but also mirrors each chunk of data onto a second drive (providing redundancy). This differs from RAID10, which writes data to half of the drives and mirror copies to the other half. With eight drives, for example, a large-file RAID10 read will be serviced by four drives, while with RAID1E the read will be serviced by all eight drives. With multiple simultaneous readers, a smart RAID10 algorithm will read from both halves of the mirror, boosting performance to RAID1E levels, but at lighter loads RAID1E will provide much better speed.
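
To make the RAID1E versus RAID10 comparison concrete, here is a toy layout sketch; it is not Oracle ASM's or LSI's actual on-disk format, just an illustration of where the primary copy of each chunk lands on an eight-drive set:

```python
# Toy layout model: which drive holds the primary copy of each chunk.
# Illustrative only; real RAID1E/ASM layouts differ in detail.
N_DRIVES = 8
N_CHUNKS = 16  # a "large file" worth of chunks

def raid1e_layout(chunk: int) -> tuple[int, int]:
    primary = chunk % N_DRIVES
    mirror = (primary + 1) % N_DRIVES   # copy lives on the neighbouring drive
    return primary, mirror

def raid10_layout(chunk: int) -> tuple[int, int]:
    pair = chunk % (N_DRIVES // 2)      # four mirrored pairs
    return 2 * pair, 2 * pair + 1       # (primary, mirror)

for name, layout in (("RAID1E", raid1e_layout), ("RAID10", raid10_layout)):
    primaries = {layout(c)[0] for c in range(N_CHUNKS)}
    print(f"{name}: a single sequential read of the primary copies touches "
          f"{len(primaries)} of {N_DRIVES} drives")
# RAID1E spreads the primary copies across all eight drives, RAID10 across four,
# which is why the single-reader large-file numbers differ so much.
```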

By the way, while I'm using Oracle ASM for the RAID processing, LSI cards offer a RAID1E implementation as well. I found that I could get better results by using the Oracle implementation, but I do recommend looking into RAID1E if you need fast reads and redundancy.

I *wish* that I could find a server motherboard with as many PCIe slots as the P6T7 you describe - one impressive motherboard! I need tons of RAM and quad CPUs, however, so I'm stuck with what I have for now. I'm looking into the LSI 9202 as one solution to my PCIe shortage and I'm also checking out a PCIe extender/splitter/bridge. I'd just buy an HP DL585 G7 (11 PCIe slots) but that's way above my budget for this project. Thanks to eBay, my 48-core, 256GB RAM, 10,000 MB/Second, redundant power SSD monster machine has cost me less than the price of an empty HP chassis.
 

dba

Moderator
PCIe 3.0 does sound like a very welcome improvement, but unfortunately I can't wait. I imagine that it'll be a year or two before there are server boards and next-generation RAID controllers that support the new specification? The gaming community seems to get access to new technology much more quickly than anyone else.

 

dba

Moderator
I didn't know that the fancy LSI cards can't do JBOD - very good to know. I imagine that in my case the latency overhead of single-drive RAID0 would negate any possible throughput advantage of those cards, if one exists.

 

mobilenvidia

Moderator
How about the Supermicro X8OBN-F with 10x PCIe x8 slots (woohahaha)?
A maximum of 1TB DDR3 might be a little cramped :)
Dual 10-core Xeons are not really enough to play Solitaire :D

But you will probably want LSI920x controllers in plain IT mode and let the OS take care of the RAID.
For economy, go for the IBM M1015 and crossflash it to an LSI9211-8i (you can get about 4x M1015s for the price of one LSI9211).
You can always reflash to IR mode and have RAID 1E, 0, or 10.
 

dba

Moderator
I'll put that $8K Supermicro monster on my Christmas list... right below the Aston Martin and Amel 64 yacht I'm fairly sure my wife won't be buying me. Dreaming is free, she reminds me.

I managed to buy four LSI 920x adapters for about $70 each. They turned out to be prototype LSI cards for some OEM customer with model numbers that never shipped, but they flashed to 9200-8e just fine and LSI even shipped me low-profile brackets at no charge. My server is 2U so I needed the external ports of the -8e versions to connect to the separate Supermicro 24-bay JBOD. I had an IBM M1015 card from an earlier prototype and it performed identically to the LSI cards after cross-flashing - which isn't surprising. Actually, I think that I discovered the cross-flash trick on this web site. Thanks, whoever posted that info.

 

mobilenvidia

Moderator
So your dream would be to drive the Aston to your yacht, which houses the Supermicro server, while enjoying the roast your wife has just cooked?

You've probably got the best controllers for your setup.
It will only get better when PCIe 3.0 devices come out.
And then you'll be needing SAS3/SATA4 SSDs to use the bandwidth, but then you'll be asking, "What is the greatest-throughput controller? I'm only getting 5GB/s" :)
 

dba

Moderator
...Looks like LSI has not sold many LSI9202s, as they are rather hard to find.
Tell them to send you one to review; it's not like you can't fill all 16 ports with SSDs :)...
By the way, the bad news is that the LSI OEM sales group says the LSI SAS9202-16e isn't even available to OEMs right now. The good news is that the card will be released to the retail channel in the "June/July" timeframe. With two SAS2008 controllers on an x16 card, it should be able to push 5GB/Second. I'll take two of them, please!
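
A rough sanity check on that 5GB/Second expectation, assuming each SAS2008 on the card keeps delivering the ~2,568 MB/Second seen in the single-controller tests above and the card sits in a PCIe 2.0 x16 slot (both assumptions, not specs):

```python
# Back-of-the-envelope estimate for a dual-SAS2008 card such as the SAS9202-16e.
PER_SAS2008 = 2568     # MB/s, best observed single-controller throughput above (assumed to hold)
PCIE2_X16 = 8000       # MB/s, approximate usable bandwidth of a PCIe 2.0 x16 slot

card_estimate = 2 * PER_SAS2008
print(f"Estimated card throughput: ~{card_estimate} MB/s "
      f"({100 * card_estimate / PCIE2_X16:.0f}% of the x16 slot)")
# Roughly 5,100 MB/s, so 5GB/Second looks plausible and the x16 link would not be the bottleneck.
```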