LSI 9207-8i - Successor to 9211?


newthru

New Member
Jun 15, 2012
Hello all,

I see LSI has this card on their HBA section for PCIe 3. Does anyone have any experience with it and is it considered the successor of the 9211? I presume JBOD/IT/passthru is supported?
 

mobilenvidia

Moderator
Sep 25, 2011
New Zealand
The LSI 92x7 range is the PCIe 3.0 upgrade, based on the SAS2308 (which supports PCIe 3.0).
All of the new HBAs are SAS2308-based now.

It is basically a PCIe 3.0 update of the LSI 9205.

These slipped by me; I have some work to do now.
 

lenard

New Member
May 27, 2012
Mobilenvidia,

I noticed that the 2308 has a cache and the 9207 doesn't. Wouldn't the latter be better for use with XFS, since there is no purpose for the cache (or does the cache get used in IT mode)?

Thanks.
 

mobilenvidia

Moderator
Sep 25, 2011
New Zealand
The SAS2308 and SAS2008 have a tiny cache for the processor to work in.
It isn't used for anything else that I know of; RAID 5 is absolutely terrible on these HBAs, but RAID 0 and 1 are great, and IT mode is even better.

The 9207 = SAS2308
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
The LSISAS2308 is a SAS/SATA controller chip. The LSI 9207-8i/8e is a SAS/SATA PCIe card based on the LSISAS2308 controller chip. The 9207 card is sold as an HBA (host bus adapter) as opposed to a RAID card, even though it does support the less complex RAID levels like RAID0, RAID1, RAID10, and RAID1E. The 9207 card does not support RAID5, 6, 50, or 60, all of which would require large RAM caches in order to achieve good performance for "hardware" RAID (RAID operations performed by the PCIe card).

That said, "software" RAID (RAID operations performed by the host operating system, e.g. using mdadm with XFS) provides very good performance on multi-core CPUs, so you don't always need to go with more expensive hardware RAID cards.
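
As a rough sketch of that kind of setup (device names and the mount point are just placeholders, and creating the array destroys whatever is on those disks):

# stripe four disks into a single md RAID0 device
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# put XFS on the array and mount it
mkfs.xfs /dev/md0
mkdir -p /mnt/array
mount /dev/md0 /mnt/array

In IT mode the HBA just passes the disks straight through; md does all of the striping on the host CPU.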

Mobilenvidia,

I noticed that the 2308 has a cache and the 9207 doesn't. Wouldn't the latter be better for use with XFS, since there is no purpose for the cache (or does the cache get used in IT mode)?

Thanks.
 

odditory

Moderator
Dec 23, 2010
Hello all,

I see LSI has this card on their HBA section for PCIe 3. Does anyone have any experience with it and is it considered the successor of the 9211? I presume JBOD/IT/passthru is supported?
The 9207 isn't LSI's next-gen part, but an in-between-generations SAS2 PCIe 3.0 offering. The true successor to the 9211 (SAS2008) will be the 9311 (SAS3008), due in 2013, which will feature true next-gen SAS3 12Gb/s support.

On a side note, the next-gen LSI RAID offering will be the SAS3108 controller, also PCIe 3.0, with 1866MHz DDR3 cache memory and SAS3 12Gb/s support, and also due in 2013. It's the SAS3108 that we'll find under the hood of the next-gen Areca, IBM, and Intel RAID offerings, assuming LSI continues its standing OEM relationships. To match the next-gen SAS3 controllers, their new expander silicon - the SAS3x48 - is already in the hands of OEM partners like Supermicro, Intel, and Areca, from my understanding.

LSI has a pretty aggressive roadmap, that's for sure. A major expansion of their own retail product line is happening as well, with a significant push into flash-based caching/acceleration that will leverage PCIe 3.0.
 

ehorn

Active Member
Jun 21, 2012
ehorn, are those your results? Pretty darn good!
Yea, there was some concern that the Chenbro 24-bay I picked up did not come with the 6Gb/s backplane, as they made two backplanes for that chassis and the seller could not confirm from the P/N alone. No more concerns though... :)

Did a couple of runs this morning. The drives are fresh and raw. I did not gather enough data points to plot the queue-depth curves, and I still want to gather more info around higher-KB mixed loads (roughly my intended usage). I think these drives should perform quite well in that scenario (at their price point). But today was more about smoke testing than finding the optimum, and I figured I would throw these screenies up.

For me it confirms the reviewers' findings - the 9207 is a very nice HBA.

peace,
 

normadize

New Member
Oct 11, 2012
It is a solid HBA, no doubt...

Here are a couple of recent reviews:

www.thessdreview.com/our-reviews/sata-3/lsi-sas-9207-8i-pcie-3-0-host-bus-adapter-quick-preview/

www.tweaktown.com/reviews/4882/lsi_...controller_host_bus_adapter_review/index.html

Here are a few IOMeter stats with (8) Sandisk Extreme 240GB on a value-based, consumer grade 1155 board...
Hi,

I've seen those reviews as well. There's another one here: http://www.servethehome.com/lsi-9207-9217-hbas-sas2308-6gbs-sas-sata-pcie-30/

I apologize in advance for my lack of knowledge, I hope a kind soul can spare a few seconds to explain something easy.

I know that by default the LSI 9207 comes with the IT firmware and that I can flash the IR f/w to get basic RAID 1 and 0 support. What I don't yet understand is the high throughput and IOPS obtained in those reviews (4+ GB/s and 400,000+ IOPS), which do not use the drives in RAID; they appear to be in JBOD.

I never used JBOD. Were those 8 drives in the above reviews of the LSI 9207 connected in a plain JBOD and nothing else? Or was the JBOD coupled with a software RAID in the OS (e.g. mdadm)?

For instance, if I get the 9207 and hook up 4 Samsung 830 drives in JBOD and do nothing else, should I expect a roughly 3-4x speed increase?

I was under the impression that JBOD does not boost performance at all. I know I'm missing something dead simple.

Cheers

p.s. Edit: I'm looking to speed up 4 SSDs (but no RAID 5 or anything fancy). Is the 9207 with IR f/w equivalent to the 9211 in functionality?
 

ehorn

Active Member
Jun 21, 2012
Hello normadize,

The benchmarks are basically configured as separate volumes being accessed individually by worker(s)/manager(s). Such a configuration does not represent typical use cases; rather, it is intended to find the performance/saturation levels of the controller.

Many use cases favor fewer (and larger) volumes. That being said, a performant (and usable) configuration for the 9207 would be IT mode with the drives striped by the OS. This will likely give the best performance, IMHO.

Aside from a very few (and very costly) SSDs, four current-gen 6Gb/s SSDs will not saturate a 9211 controller. If four is your target configuration, a 9211 would be fine and present no bottleneck.

HTH.

peace,
 

normadize

New Member
Oct 11, 2012
Ok, so I remembered correctly that JBOD does not increase read/write speeds for one thread.

What I need (and I actually do need it) is very fast read/write with high IOPS, but as affordable as possible. Note I didn't say "cheap". The application in question is a monster mathematical code that requires about 750-800 GB of RAM (yes, RAM). We can't afford to purchase a server-class system with 1 TB of actual RAM, so my only option is to swap to disk. Swapping to a hard drive slows the execution time to several weeks; we can't have that. The compromise solution is a fast array of SSDs to hold a huge swap partition.

We are willing to spend some dosh on a very good RAID 0 controller and some fast SSDs. Any small percentage of time saved can translate into several hours of speedup for our simulations, since we'll be swapping a large variety of data heavily (huge chunks as well as small chunks).

The Samsung 840 Pro or Vertex 4 is what I have in mind for the SSDs. Those drives seem unbeatable at the moment (high throughput, high IOPS).

The controller is where I'm still undecided. Can anyone tell me whether a 4-drive software RAID 0 using mdadm in Linux on an LSI 9207 would be slower than, the same as, or faster than a hardware RAID 0 on an LSI 9271? How about an 8-drive setup?

I know the 9271 would be underused as I won't be using any RAID 5/6 but it's the speed of RAID 0 that I'm most interested in. If I am to use 8 drives, the 9207 comes much cheaper.

Cheers

p.s. I was told that the 9207 with IR firmware is much slower than with the IT firmware but I still don't know about a RAID 0 scenario (IT would require mdadm). Can anyone comment on this too?
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Hello normadize,

All of the below assumes that you mean "operating system swap" when you say "swap".

You have two good choices:
1) Build a speedy swap partition using a battery-backed RAID card as you describe.
2) multiplex your swap.

OS swapping moves small chunks of memory to disk and back as needed - 4KB chunks are the norm if I remember correctly. If the OS moves a large set of related chunks at one time, the IO could look relatively sequential. You'll need to figure out, using your favorite tool, the average swap IO size for your particular application. This is important - you really can't optimize your system without knowing what kind of IO you are generating. I'll assume that you don't yet know this information, so I'll talk generally.
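
For example, something like this shows the average IO size hitting the swap device while the application is thrashing (a sketch only; it assumes the sysstat package is installed, and the device name is a placeholder):

# extended per-device statistics, refreshed every second
iostat -x /dev/sdb 1
# avgrq-sz = average request size in 512-byte sectors (newer sysstat versions call it areq-sz, in KB)
# r/s and w/s give IOPS; rkB/s and wkB/s give throughput
# for per-request detail: blktrace -d /dev/sdb -o - | blkparse -i -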

The "best" RAID swap approach would be SSD drives in RAID10 or RAID0 (trading off reliability for speed) behind a RAID card with a large battery-backed cache. On PCIe3 I'd look at the LSI 9286* and 927* series. On PCIe2, you can use the same cards or step down to the 9285* and 9265* series. If your swap workload is high-IOPS, then it would be worth testing the FastPath option for the above cards. With this setup, you can expect no more than 2,500MB/s maximum throughput on PCIe2 and around 4,000MB/s on PCIe3. I haven't tested IOPS on these cards, but I'd expect 200K to 400K maximum. Actual swap performance will be lower, of course. I'd be very surprised if the swap code was optimized to drive storage as hard as would be required to tax these cards. In any case, you'd see far better swap performance than before.

The alternative is to trade the expense and complexity of a RAID card for a far cheaper host bus adapter and let the OS do the work. This might also provide better performance than a RAID card.
Both Linux and Windows support multiple swap files/partitions, and with the right configuration, the operating system "stripes" across all swap storage like a RAID.
In Windows, just add a few SSD drives and then add a swap file to each. With eight 200GB swap files on eight SSD drives, you'll have more swap IO than anyone in history.
In Linux, you can add multiple swap partitions, but there is some magic required to get the OS to use all of them at the same time. Each partition has to have a priority set, all of your SSD-based swap partitions must have the same priority, and that priority must be the highest of all your swap priorities. Get it wrong and you get sequential swap usage, not swap striping.
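
A minimal sketch of that setup (device names are placeholders; the priority value only matters in that it is equal across the SSD partitions and higher than any other swap):

# prepare each SSD swap partition
mkswap /dev/sdb1
mkswap /dev/sdc1
mkswap /dev/sdd1
mkswap /dev/sde1
# activate them with the same (highest) priority so the kernel round-robins across them
swapon -p 10 /dev/sdb1
swapon -p 10 /dev/sdc1
swapon -p 10 /dev/sdd1
swapon -p 10 /dev/sde1
# or make it permanent with one /etc/fstab line per partition:
#   /dev/sdb1  none  swap  sw,pri=10  0  0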

RAID or HBA - you won't know which will provide better performance unless you test. Consider doing a mini-test by moving swap to one or two SSD drives using existing SAS or SATA ports - no HBA or RAID card expense yet. Don't just add the SSDs to your swap configuration, replace your existing swap with SSD.

You will also want to tune vm.overcommit_memory and swappiness parameters if you are on Linux. I would also investigate huge pages on your system. Lastly, for the future, it might be worth looking at software for distributed shared memory. With such software, plus two Infiniband cards and a cable from eBay, you could link two servers with 512GB RAM each (using cheap 16GB RAM sticks) into a 1TB monster.
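
On the tuning side, the knobs look roughly like this (the values shown are only illustrative starting points to experiment with, not recommendations):

# how aggressively the kernel swaps (0-100; higher = push pages out to the fast SSD swap sooner)
sysctl -w vm.swappiness=100
# overcommit policy: 0 = heuristic (default), 1 = always allow, 2 = strict accounting
sysctl -w vm.overcommit_memory=1
# check whether huge pages are configured or in use
grep -i huge /proc/meminfo
# add the sysctl settings to /etc/sysctl.conf to make them persistent across reboots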

As for your other questions:
- Both the Samsung 840 pro and Vertex4 drives are very new. It'll be some time before we know how they perform in server environments. I was very excited about the Vertex4, but I've seen some quirky results and so have not invested.
- RAID0 is pretty simple and there won't be a big throughput difference between 9207 hardware RAID (using IR firmware), 9207 mdadm RAID (IT or IR firmware), and 9271 hardware RAID. For smaller transfer sizes, there may be differences.
- According to my tests, there is no performance difference between IR and IT firmware when you aren't using the RAID functionality.
 

pettakos

New Member
Oct 9, 2012
I guess the same applies to me as well.
I am running WHS 2011 with 8 mechanical drives on a Supermicro AOC-SASLP-MV8, and I'm planning on replacing it with something faster like the above-mentioned controller.
Would I benefit from it, or should I spend less than that on an IBM M1015?
It is mainly used as a file server, streaming data across my home network.
Will I see any increase in data transfer rates if I use the 9207-8i compared to, say, the 9211-8i?
For the near future, I am not planning on replacing TBs of data with SSDs.
Any advice is greatly appreciated.
 

normadize

New Member
Oct 11, 2012
Hi dba,

Many thanks for taking the time to reply with such a thorough and informative answer. I very much appreciate it. I have a few more queries below if you could waste some more time.

All of the below assumes that you mean "operating system swap" when you say "swap".

You have two good choices:
1) Build a speedy swap partition using a battery-backed RAID card as you describe.
2) multiplex your swap.

OS swapping moves small chunks of memory to disk and back as needed - 4KB chunks are the norm if I remember correctly. If the OS moves a large set of related chunks at one time, the IO could look relatively sequential. You'll need to figure out, using your favorite tool, the average swap IO size for your particular application. This is important - you really can't optimize your system without knowing what kind of IO you are generating. I'll assume that you don't yet know this information, so I'll talk generally.
Your assumption is correct. Matlab is the application in question and I was actually just thinking of investigating the structure of memory data that it uses when swapping in order to choose the best SSD. Can you recommend any such tools (preferably for Linux but Windows would be fine too)?

The "best" RAID swap approach would be SSD drives in RAID10 or RAID0 (trading off reliability for speed) behind a RAID card with a large battery-backed cache. On PCIe3 I'd look at the LSI 9286* and 927* series. On PCIe2, you can use the same cards or step down to the 9285* and 9265* series. If your swap workload is high-IOPS, then it would be worth testing the FastPath option for the above cards. With this setup, you can expect no more than 2,500MB/s maximum throughput on PCIe2 and around 4,000MB/s on PCIe3. I haven't tested IOPS on these cards, but I'd expect 200K to 400K maximum. Actual swap performance will be lower, of course. I'd be very surprised if the swap code was optimized to drive storage as hard as would be required to tax these cards. In any case, you'd see far better swap performance than before.
The 9286 is pricey. I was looking at the 9271 with 4 ports initially, but I am also considering the 8-port version and using 8 smaller SSDs to get the same capacity but higher speeds. The snag is that, going from 4 to 8 SSDs, some tests do not show a linear increase in speed and IOPS, but I haven't seen RAID tests for these cards. Do you know anything about the performance of the 9266? I haven't found any reviews.

The alternative is to trade the expense and complexity of a RAID card for a far cheaper host bus adapter and let the OS do the work. This might also provide better performance than a RAID card.
Both Linux and Windows support multiple swap files/partitions, and with the right configuration, the operating system "stripes" across all swap storage like a RAID.
In Windows, just add a few SSD drives and then add a swap file to each. With eight 200GB swap files on eight SSD drives, you'll have more swap IO than anyone in history.
In Linux, you can add multiple swap partitions, but there is some magic required to get the OS to use all of them at the same time. Each partition has to have a priority set, all of your SSD-based swap partitions must have the same priority, and that priority must be the highest of all your swap priorities. Get it wrong and you get sequential swap usage, not swap striping.
I'm quite interested in this scenario. The system has 32 GB of RAM and no swap at all at the moment; the SSDs would hold the entire swap, so it's not an issue to prioritize it.

I had no idea that Windows stripes across multiple swap partitions automatically and that Linux would do the same if the swap partition priorities are equal. Are you really sure about this? I was under the impression that both OSs would use a second swap file/partition only if the first one filled up; in other words, that they would write to them sequentially. I am now very curious to test this and find out.

As above, could you recommend a software tool that can show read/write and/or IO speed for swapping under Linux?

With such a tool I could test other scenarios too, including mdadm with one big swap partition, multiple swap partitions with equal priorities, and also a h/w RAID, to compare all possible setups.

RAID or HBA - you won't know which will provide better performance unless you test. Consider doing a mini-test by moving swap to one or two SSD drives using existing SAS or SATA ports - no HBA or RAID card expense yet. Don't just add the SSDs to your swap configuration, replace your existing swap with SSD.
Indeed, that's a good idea to try out the multiple-swap-partitions trick.

I already have a Samsung 830 256GB, which is the only swap. At the moment it's too little and too slow, which is why I need more swap space and also faster swap.

You will also want to tune vm.overcommit_memory and swappiness parameters if you are on Linux. I would also investigate huge pages on your system. Lastly, for the future, it might be worth looking at software for distributed shared memory. With such software, plus two Infiniband cards and a cable from eBay, you could link two 512GB RAM servers (using cheap 16GB RAM sticks) into a 1TB monster.
I think that will turn out to be too expensive for now. However, I may have misunderstood you: what do you mean by "ram server"? Do you have a link to such a thing? Or are you talking about an actual server machine with a mobo supporting up to 512 GB? (those are beyond our reach right now).

As for your other questions:
- Both the Samsung 840 pro and Vertex4 drives are very new. It'll be some time before we know how they perform in server environments. I was very excited about the Vertex4, but I've seen some quirky results and so have not invested.
- RAID0 is pretty simple and there won't be a big throughput difference between 9207 hardware RAID (using IR firmware), 9207 mdadm RAID (IT or IR firmware), and 9271 hardware RAID. For smaller transfer sizes, there may be differences.
- According to my tests, there is no performance difference between IR and IT firmware when you aren't using the RAID functionality.
I would love to see some performance results comparing 9271 h/w RAID and 9207 mdadm RAID.

From your experience or gut feeling, how would you rank the following in terms of swap throughput and IOPS?

a) 9207 mdadm RAID0
b) 9207 IR RAID0
c) 9271 h/w RAID0
d) 9207 with individual swap partitions

I know that mdadm would use the CPU, but that's OK actually, because while Matlab is swapping the CPU is hardly used for processing at all -- I've already seen this with the swap on the Samsung 830: the CPU usage drops to practically 0% until all the data has been written to or read from swap.

Once again, many thanks for your reply.

Cheers
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Hi dba,

...I'm quite interested in this scenario. The system has 32 GB of RAM and no swap at all at the moment; the SSDs would hold the entire swap, so it's not an issue to prioritize it.

I had no idea that Windows stripes across multiple swap partitions automatically and that Linux would do the same if the swap partition priorities are equal. Are you really sure about this? I was under the impression that both OSs would use a second swap file/partition only if the first one filled up; in other words, that they would write to them sequentially. I am now very curious to test this and find out.

Cheers

My very first thought is this: Can't you spring for more RAM? 700GB of data and 32GB of RAM does not sound promising, no matter how fast you can swap. Anyway, back to swap multiplexing:

I am 100% sure that multiplexed swap files improve swap performance on both Windows and Linux - I do it all the time. Linux uses multiple swap files sequentially by default, but the documentation describes the configuration parameters required to get striped swap usage. Windows doesn't say, but I suspect striped usage is the only option.
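
A quick way to check whether the striping is actually happening (just a sketch): list the active swap areas and watch their usage - with equal priorities, the used column should grow roughly evenly across the SSDs while the application swaps.

cat /proc/swaps    # one line per swap area: filename, type, size, used, priority
swapon -s          # the same summary via swapon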

Of course I have not multiplexed to the extent that you are talking about, so I can't say how the technique will work with that many SSD drives.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Hi dba,

Many thanks for taking the time to reply with such a thorough and informative answer. I very much appreciate it. I have a few more queries below if you could waste some more time.

Your assumption is correct. Matlab is the application in question and I was actually just thinking of investigating the structure of memory data that it uses when swapping in order to choose the best SSD. Can you recommend any such tools (preferably for Linux but Windows would be fine too)?...
Cheers
Sorry, I can't remember the exact Linux tools that I used. I remember using a built-in Linux script to gather the raw data and then sar or something to analyze it. For Windows, I once read a Microsoft report that analyzed swap usage patterns across several thousand servers, showing the breakdown between small and large block transfers for both reads and writes - they had to get that data somehow, and it was probably WPM. I haven't needed to dig into that level of detail.
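
Off the top of my head, something along these lines should get you started on Linux (it assumes the sysstat package is installed; exact column names vary a bit between versions):

vmstat 1       # si/so columns: KB swapped in and out per second
sar -W 1 10    # pswpin/s and pswpout/s: pages swapped in/out per second
sar -B 1 10    # paging statistics (pgpgin/s, pgpgout/s, faults)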
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Hi dba,

...I would love to see some performance results comparing 9271 h/w RAID and 9207 mdadm RAID.

From your experience or gut feeling, how would you rank the following in terms of swap throughput and IOPS

a) 9207 mdadm raid0,
b) 9207 IR raid0,
c) 9271 h/w raid0 and
d) 9207 with individual swap partitions

Cheers
Because your use case is 100% OS swapping, and that swapping is generated by one specific application, you are in a very unusual position. I don't think you can rely on any existing benchmarks; you will have to run your own tests. I'd try multiplexed swap first because it's cheaper, simpler, and easier to implement.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Hi dba,

I think that will turn out to be too expensive for now. However, I may have misunderstood you: what do you mean by "ram server"? Do you have a link to such a thing? Or are you talking about an actual server machine with a mobo supporting up to 512 GB? (those are beyond our reach right now).

Cheers
If you have a nice server motherboard with 32 RAM slots, then you can get 1TB of RAM by buying 32GB RAM sticks. Unfortunately, 32GB modules are $900 each and that's $30K in RAM.
On the other hand, 16GB modules are $130 each. 32 of these cost just $4K, but you'll only have 512GB. Buy two such servers and you've spent just $8K in RAM to get 1TB, but it's split across two servers. There are several different technologies, however, to turn those two servers into one big virtual server. See scalemp.com for one commercial example. Finding an open source equivalent that works with Matlab could be an insurmountable challenge, however.
 