Mobilenvidia,
I noticed that the 2308 has a cache and the 9207 doesn't. Wouldn't the latter be better to use with XFS, since there is no purpose for the cache (or does the cache get used in IT mode)?
Thanks.
The 9207 isn't LSI's next-gen part, but an in-between-generations SAS2 PCIe 3.0 offering. The true successor to the 9211 (SAS2008) will be the 9311 (SAS3008), due in 2013, which will feature true next-gen SAS3 12Gb/s support.

Hello all,
I see LSI has this card on their HBA section for PCIe 3. Does anyone have any experience with it and is it considered the successor of the 9211? I presume JBOD/IT/passthru is supported?
Yea, there was some concern that the Chenbro 24-bay I picked up did not come with the 6Gb/s backplane, as they made two backplanes for that chassis and the seller could not confirm from the P/N alone. No more concerns though... ehorn, are those your results? Pretty darn good!
Hi,

It is a solid HBA, no doubt...
Here are a couple of recent reviews:
www.thessdreview.com/our-reviews/sata-3/lsi-sas-9207-8i-pcie-3-0-host-bus-adapter-quick-preview/
www.tweaktown.com/reviews/4882/lsi_...controller_host_bus_adapter_review/index.html
Here are a few IOMeter stats with (8) SanDisk Extreme 240GB drives on a value-based, consumer-grade 1155 board...
Your assumption is correct. Matlab is the application in question and I was actually just thinking of investigating the structure of memory data that it uses when swapping in order to choose the best SSD. Can you recommend any such tools (preferably for Linux but Windows would be fine too)?

All of the below assumes that you mean "operating system swap" when you say "swap".
You have two good choices:
1) Build a speedy swap partition using a battery-backed RAID card as you describe.
2) multiplex your swap.
OS swapping moves small chunks of memory to disk and back as needed - 4KB chunks are the norm if I remember correctly. If the OS moves a large set of related chunks at one time, the IO could look relatively sequential. You'll need to figure out, using your favorite tool, the average swap IO size for your particular application. This is important - you really can't optimize your system without knowing what kind of IO you are generating. I'll assume that you don't yet know this information, so I'll talk generally.
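If you want a quick-and-dirty number on Linux, something like the little Python sketch below will do it - it samples /proc/diskstats for the device holding your swap and divides sectors by requests over an interval. Treat it as an illustration only: the device name is a placeholder (check /proc/swaps for yours), and iostat -x from the sysstat package will give you similar numbers with less typing.

```python
#!/usr/bin/env python3
# Rough estimate of the average I/O request size hitting a swap device on Linux.
# Assumes swap sits on its own block device; the device name below is an example,
# not a recommendation - look in /proc/swaps to find yours.
import time

DEVICE = "sdb2"          # hypothetical swap partition
INTERVAL = 10            # seconds to sample while the workload is actively swapping
SECTOR_BYTES = 512       # /proc/diskstats counts 512-byte sectors

def read_stats(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                # fields[3]=reads completed, fields[5]=sectors read,
                # fields[7]=writes completed, fields[9]=sectors written
                return int(fields[3]), int(fields[5]), int(fields[7]), int(fields[9])
    raise SystemExit(f"device {dev} not found in /proc/diskstats")

r0, rs0, w0, ws0 = read_stats(DEVICE)
time.sleep(INTERVAL)
r1, rs1, w1, ws1 = read_stats(DEVICE)

reads, read_sectors = r1 - r0, rs1 - rs0
writes, write_sectors = w1 - w0, ws1 - ws0

if reads:
    print(f"avg read size:  {read_sectors * SECTOR_BYTES / reads / 1024:.1f} KB")
if writes:
    print(f"avg write size: {write_sectors * SECTOR_BYTES / writes / 1024:.1f} KB")
print(f"reads/s: {reads / INTERVAL:.1f}  writes/s: {writes / INTERVAL:.1f}")
```

Run it while Matlab is actually pushing the machine into swap, otherwise you'll just be measuring idle noise.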
The 9286 is pricey. I was looking at the 9271 with 4 ports initially, but I am also considering the 8-port version and using 8 smaller SSDs to get the same capacity but higher speeds. The snag is that from 4 to 8 SSDs some tests do not show a linear speed and IO increase, but I haven't seen RAID tests for these cards. Do you know anything about the performance of the 9266? I haven't found any reviews.

The "best" RAID swap approach would be SSD drives in RAID10 or RAID0 (trading off speed against reliability) behind a RAID card with a large battery-backed cache. On PCIe3 I'd look at the LSI 9286* and 927* series. On PCIe2, you can use the same cards or step down to the 9285* and 9265* series. If your swap workload is high-IOPS, then it would be worth testing out the FastPath option for the above cards. With this setup, you can expect no more than 2,500 MB/s maximum throughput on PCIe2 and around 4,000 MB/s on PCIe3. I haven't tested IOPS on these cards, but I'd expect 200K to 400K maximum. Actual swap performance will be lower, of course. I'd be very surprised if the swap code was optimized to drive storage as hard as would be required to tax these cards. In any case, you'd see far better swap performance than before.
I'm quite interested in this scenario. The system has 32 GB of RAM and no swap at all at the moment; the SSDs would hold the entire swap so it's not an issue to prioritize it.

The alternative is to trade the expense and complexity of a RAID card for a far cheaper host bus adapter and let the OS do the work. This might also provide better performance than a RAID card.
Both Linux and Windows support multiple swap files/partitions, and with the right configuration, the operating system "stripes" across all swap storage like a RAID.
In Windows, just add a few SSD drives and then add a swap file to each. With eight 200GB swap files on eight SSD drives, you'll have more swap IO than anyone in history.
In Linux, you can add multiple swap partitions, but there is some magic required to allow the OS to use all of them at the same time. Each partition has to have a priority set, the priority of all of your SSD-based swap partitions must be the same, and the priority for the SSD-based swap partitions must be the highest priority of all of your swap priorities. Get it wrong and you get sequential swap files, not swap file striping.
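If you want to verify what you ended up with, here is a small read-only sketch (illustration only, nothing about it is required) that parses /proc/swaps and shows which swap areas the kernel will actually round-robin across:

```python
#!/usr/bin/env python3
# Sanity-check Linux swap priorities: for round-robin "striping" across devices,
# the swap areas you want striped must all share the same, highest priority.
# Parses /proc/swaps (columns: Filename Type Size Used Priority).

def read_swaps():
    swaps = []
    with open("/proc/swaps") as f:
        next(f)                                             # skip the header line
        for line in f:
            fields = line.split()
            if len(fields) >= 5:
                swaps.append((fields[0], int(fields[4])))   # (device, priority)
    return swaps

swaps = read_swaps()
if not swaps:
    raise SystemExit("no active swap areas")

top = max(prio for _, prio in swaps)
striped = [dev for dev, prio in swaps if prio == top]

print(f"highest priority: {top}")
print("areas used round-robin at that priority:")
for dev in striped:
    print(f"  {dev}")

if len(striped) < len(swaps):
    print("note: these lower-priority areas are only used after the above fill up:")
    for dev, prio in swaps:
        if prio != top:
            print(f"  {dev} (priority {prio})")
```

The priorities themselves are set with swapon -p or the pri= option in /etc/fstab.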
Indeed that's a good idea to try out the swapping to multiple partitions trick.

RAID or HBA - you won't know which will provide better performance unless you test. Consider doing a mini-test by moving swap to one or two SSD drives using existing SAS or SATA ports - no HBA or RAID card expense yet. Don't just add the SSDs to your swap configuration, replace your existing swap with SSD.
I think that will turn out to be too expensive for now. However, I may have misunderstood you: what do you mean by "ram server"? Do you have a link to such a thing? Or are you talking about an actual server machine with a mobo supporting up to 512 GB? (those are beyond our reach right now).

You will also want to tune vm.overcommit_memory and swappiness parameters if you are on Linux. I would also investigate huge pages on your system. Lastly, for the future, it might be worth looking at software for distributed shared memory. With such software, plus two Infiniband cards and a cable from eBay, you could link two 512GB RAM servers (using cheap 16GB RAM sticks) into a 1TB monster.
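Before touching anything, it's worth recording a baseline. The sketch below (just an illustration; the paths are the standard Linux ones) dumps the current values of those knobs plus the huge page status:

```python
#!/usr/bin/env python3
# Dump the current values of the VM knobs mentioned above so you have a baseline
# before experimenting (read-only; changing them is sysctl/root territory).
import os

SYSCTLS = [
    "/proc/sys/vm/swappiness",
    "/proc/sys/vm/overcommit_memory",
    "/proc/sys/vm/overcommit_ratio",
]

for path in SYSCTLS:
    if os.path.exists(path):
        with open(path) as f:
            print(f"{path}: {f.read().strip()}")

# Huge page status from /proc/meminfo
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith(("HugePages_Total", "HugePages_Free", "Hugepagesize")):
            print(line.rstrip())

# Transparent huge pages, if the kernel exposes them
thp = "/sys/kernel/mm/transparent_hugepage/enabled"
if os.path.exists(thp):
    with open(thp) as f:
        print(f"{thp}: {f.read().strip()}")
```

Whatever values you settle on can be applied with sysctl -w and made permanent in /etc/sysctl.conf.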
I would love to see some performance results showing 9271 h/w raid and 9207 mdadm raid.

As for your other questions:
- Both the Samsung 840 pro and Vertex4 drives are very new. It'll be some time before we know how they perform in server environments. I was very excited about the Vertex4, but I've seen some quirky results and so have not invested.
- RAID0 is pretty simple and there won't be a big throughput difference between 9207 hardware RAID (using IR firmware), 9207 mdadm RAID (IT or IR firmware), and 9271 hardware RAID. For smaller transfer sizes, there may be differences.
- According to my tests, there is no performance difference between IR and IT firmware when you aren't using the RAID functionality.
Hi dba,
...I'm quite interested in this scenario. The system has 32 GB of ram and no swap at all at the moment; the SSDs would hold the entire swap so it's not an issue to prioritize it
I had no idea that Windows stripes across multiple swap partitions automatically and that Linux would do the same if the swap partition priorities are equal. Are you really sure about this? I was under the impression that both OSs would use a 2nd swap file/partition only if the 1st one filled up - in other words, that they would write to them sequentially. I am now very curious to test this and find out.
Cheers
Sorry, I can't remember the exact Linux tools that I used. I remember using a built-in Linux script to gather the raw data and then sar or something to analyze it. For Windows, I once read a Microsoft report that broke down swap usage patterns across several thousand servers to show the breakdown between small and large block transfers for both reads and writes - they had to get that data somehow, and it was probably WPM. I haven't needed to dig into that level of detail.

Hi dba,
Many thanks for taking the time to reply with such a thorough and informative answer. I very much appreciate it. I have a few more queries below if you could waste some more time.
Your assumption is correct. Matlab is the application in question and I was actually just thinking of investigating the structure of memory data that it uses when swapping in order to choose the best SSD. Can you recommend any such tools (preferably for Linux but Windows would be fine too)?...
Cheers
Because your use case is 100% OS swapping, and that swapping is generated by one specific application, you are in a very unusual position. I don't think that you can rely on any existing benchmarks and will have to run your own tests. I'd try multiplexed swap first because it's cheaper, simpler, and easier to implement.

Hi dba,
...I would love to see some performance results showing 9271 h/w raid and 9207 mdadm raid.
From your experience or gut feeling, how would you rank the following in terms of swap throughput and IOPS?
a) 9207 mdadm raid0,
b) 9207 IR raid0,
c) 9271 h/w raid0 and
d) 9207 with individual swap partitions
Cheers
If you have a nice server motherboard with 32 RAM slots, then you can get 1TB of RAM by buying 32GB RAM sticks. Unfortunately, 32GB modules are $900 each and that's $30K in RAM.

Hi dba,
I think that will turn out to be too expensive for now. However, I may have misunderstood you: what do you mean by "ram server"? Do you have a link to such a thing? Or are you talking about an actual server machine with a mobo supporting up to 512 GB? (those are beyond our reach right now).
Cheers