Anyone with 4 x Samsung 840 PRO's on RAID5 with LSI Card?


renderfarmer

Member
Feb 22, 2013
New Jersey
Hello,

I've searched a lot but could not find any benchmarks for a setup with 4x Samsung 840 PROs in a RAID 5 array on an LSI 9265/9266/9271 card.

Anyone want to share their benchmarks?

Thanks
I'm getting really uneven results.



4x Samsung 840 Pro 256GB
4x WD Velociraptor 1TB
LSI 9271-8iCC

I ultimately couldn't care less what AS SSD says, but I used to transfer a 1.3GB Maya file at 800MB/s from a dual OWC SSD RAID-0. Now I barely break 300MB/s on the same file no matter how many times I try it.
 

mobilenvidia

Moderator
Sep 25, 2011
New Zealand
I'm getting really uneven results.

4x Samsung 840 Pro 256GB
4x WD Velociraptor 1TB
LSI 9271-8iCC

I ultimately couldn't care less what AS SSD says, but I used to transfer a 1.3GB Maya file at 800MB/s from a dual OWC SSD RAID-0. Now I barely break 300MB/s on the same file no matter how many times I try it.
Secure erase the disks on a motherboard SATA port, then make your RAID 0 array. Benchmarks perform best on a clean drive; you are probably seeing real-life performance now that the disks have been used for a while.
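
If it helps, one way to do the secure erase from a Linux live environment is with hdparm. A rough sketch; /dev/sdX is a placeholder, and the drive must not be in a frozen security state (check with hdparm -I first):

hdparm --user-master u --security-set-pass pass /dev/sdX
hdparm --user-master u --security-erase pass /dev/sdX

Samsung's Magician software also has a secure erase option if you prefer that route. Either way, triple-check the device name, because the erase is irreversible.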
 

Andreas

Member
Aug 21, 2012
I tested 16x 840 Pro 256GB with the Adaptec 71605E (HBA) and IOMeter.
This is not 16 raw devices aggregated, but goes through the filesystem (one logical RAID 0 volume created by the OS). Looks like a current upper bound on PCIe 3.0 x8 performance.

6.8 GB/sec read
6.5 GB/sec write
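
For context, a back-of-the-envelope check: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so an x8 link tops out around 8 x 8 x 128/130 = 63 Gbit/s, or roughly 7.9 GB/s before protocol overhead. 6.8 GB/s of reads is close to that ceiling and works out to about 425 MB/s per SSD across the 16 drives.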

To facilitate frequent reconfigurations in my "lab" environment, I am building 16-packs of SSDs in an el-cheapo fashion. It was not yet finished when the pic was taken.


On secure erase:
I have quite a few SSDs that could not be restored to full write performance with a secure erase. Many worked, but about 10% of my Samsung 830s didn't recover.
Second recommendation: if you plan to create two or more RAID sets with SSDs, put drives with similar performance together. There is enough sample-to-sample variation to make this worth checking, since the slowest drive defines the write performance of the array.

Andy
 

renderfarmer

Member
Feb 22, 2013
New Jersey
Is FastPath already enabled?
I use 2x 840 Pro 128GB and get better results.
I hear ya; this is really disappointing.

All of the Advanced Software Options show as enabled, including FastPath.

Disk Write cache shows as enabled on both VDs:

c:\>MegaCLI -LDGetProp -DskCache -LALL -aALL


Adapter 0-VD 0(target id: 0): Disk Write Cache : Enabled
Adapter 0-VD 1(target id: 1): Disk Write Cache : Enabled


Exit Code: 0x00
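
For completeness, the full cache and IO policy for each VD (not just the disk cache) can be dumped as well; on the same MegaCLI install it should be something like:

c:\>MegaCLI -LDInfo -LAll -aAll

The "Current Cache Policy" line in that output shows whether each VD is actually running WriteBack or WriteThrough, ReadAhead or NoReadAhead, and Direct or Cached IO.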
 

renderfarmer

Member
Feb 22, 2013
New Jersey
Secure erase the disks on a motherboard SATA port, then make your RAID 0 array. Benchmarks perform best on a clean drive; you are probably seeing real-life performance now that the disks have been used for a while.
Real-life performance is really all I care about. My old file server had a very basic software RAID-0 made up of two 120GB OWC Mercury Extreme Pro 6G drives, and I could copy 1GB files at 800MB/s both locally and across my IB network. I just tried copying the same test file from the CacheCade volume to C:\ (a pair of 6G RAID-0 SSDs) and it barely broke 300MB/s.

I'll format the drives, start fresh, and then benchmark to see where the problems lie.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
6.8 GB/s - that's a record! Any problems with the Adaptec so far? If not, it may overtake the 9202 as my favorite HBA.

I tested 16x 840 Pro 256GB with the Adaptec 71605E (HBA) and IOMeter.
This is not 16 raw devices aggregated, but goes through the filesystem (one logical RAID 0 volume created by the OS). Looks like a current upper bound on PCIe 3.0 x8 performance.

6.8 GB/sec read
6.5 GB/sec write

To facilitate frequent reconfigurations in my "lab" environment, I am building 16-packs of SSDs in an el-cheapo fashion. It was not yet finished when the pic was taken.


On secure erase:
I have quite a few SSDs that could not be restored to full write performance with a secure erase. Many worked, but about 10% of my Samsung 830s didn't recover.
Second recommendation: if you plan to create two or more RAID sets with SSDs, put drives with similar performance together. There is enough sample-to-sample variation to make this worth checking, since the slowest drive defines the write performance of the array.

Andy
 

renderfarmer

Member
Feb 22, 2013
New Jersey
I tested 16x 840 Pro 256GB with the Adaptec 71605E (HBA) and IOMeter.
This is not 16 raw devices aggregated, but goes through the filesystem (one logical RAID 0 volume created by the OS). Looks like a current upper bound on PCIe 3.0 x8 performance.

6.8 GB/sec read
6.5 GB/sec write
SICK!

On secure erase:
I have quite a few SSDs that could not be restored to full write performance with a secure erase. Many worked, but about 10% of my Samsung 830s didn't recover.
Second recommendation: if you plan to create two or more RAID sets with SSDs, put drives with similar performance together. There is enough sample-to-sample variation to make this worth checking, since the slowest drive defines the write performance of the array.
That's a good tip about testing each individually and ranking them. Thanks.
 

Jeggs101

Well-Known Member
Dec 29, 2010
Wow, we need a main site post comparing these.

How is compatibility on them? Linux / VMware / Xen? Do you have one to review?

BTW, is this OT here or...?
 

renderfarmer

Member
Feb 22, 2013
New Jersey
Zeroed the drives and rebuilt a fresh RAID-0 array on the LSI 9271-8iCC.

Stripe size is 256KB (not sure if that's optimal), and Always Read Ahead and Write Back are enabled.

*EDIT* I also tried it with No Read Ahead and it made no statistically meaningful difference.



Much better, I think.

Up next I'll re-zero them and re-attach them as CacheCade. I'll do a single disk first, benchmark, and then see how it scales to 4.
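
For reference, the VD can also be created from the command line. A rough MegaCLI sketch under my assumptions (enclosure:slot IDs are placeholders, stripe size in KB, adapter 0):

c:\>MegaCLI -CfgLdAdd -r0 [252:0,252:1,252:2,252:3] WB RA Direct -strpsz256 -a0

The real enclosure:slot pairs for each SSD can be pulled from MegaCLI -PDList -a0.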
 

Andreas

Member
Aug 21, 2012
6.8 GB/s - that's a record! Any problems with the Adaptec so far? If not, it may overtake the 9202 as my favorite HBA.
I've only had this one for a short while; so far it's been fine. If I remember correctly, streaming from SSDs was also fine with my 82405 (the 24-port version with a RAID engine), but random I/O performance was significantly lower than with the LSI 9207-8i, and the driver (as of last October) was less stable than its LSI counterpart. I have a second Adaptec 71605E on order, as the power consumption is lower and it is the only 6Gb/s 16-port PCIe 3.0 adapter at a reasonable price.

Currently I run the 24 drives on a combination of one Adaptec and one LSI. Later this week I will probably fill this up to 32 drives and switch this project to an all-Adaptec build. My application reads off the combined stripe set (Adaptec/LSI) at 10.2 GB/sec, which is nice for only two PCIe slots occupied. With two Adaptecs I expect to hit between 12.5 and 13 GB/sec; we'll see. Unfortunately, LSI isn't shipping a 16-port version of the 9207-8i, which I am very satisfied with. The only culprit is the 8-port limitation, which leaves a relatively large part of the available PCIe 3.0 x8 bandwidth on the table.

In operation: the Adaptec uses HD plugs and thin (silver) cables; the red cables are from the LSI controller.


rgds,
Andy
 

klree

Member
Mar 28, 2013
Even with the feature enabled, are you using the standard FastPath volume policy for your VD?
Write Policy: Write Through
IO Policy: Direct IO
Read Policy: No Read Ahead
Stripe Size: 64KB
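
If the VD already exists, the write/read/IO policies can usually be switched in place; with MegaCLI I believe it's roughly this (VD 0 on adapter 0 used as an example):

c:\>MegaCLI -LDSetProp WT -L0 -a0
c:\>MegaCLI -LDSetProp NORA -L0 -a0
c:\>MegaCLI -LDSetProp Direct -L0 -a0

The stripe size, however, is fixed when the VD is created, so getting to 64KB means rebuilding it.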
 

renderfarmer

Member
Feb 22, 2013
New Jersey
Even with the feature enabled, are you using the standard FastPath volume policy for your VD?
Write Policy: Write Through
IO Policy: Direct IO
Read Policy: No Read Ahead
Stripe Size: 64KB
I had Always Write Back enabled and a 512KB stripe... I just rebuilt the VD with the recommended FastPath settings, and, umm, well:

 

mrkrad

Well-Known Member
Oct 13, 2012
You need to enable the disk cache policy, dude. That requires using MegaSCU to alter the settings that block it, or inputting the test key. You know the free 30-day FastPath trial key can be re-applied endlessly without losing any volumes? Someone wrote a script that runs every 20 days to load the key with MegaSCU/MegaCLI; once the FastPath demo key is active, it lets you change the policy.
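
Once that's unlocked, the per-VD disk cache itself can be flipped with MegaCLI; I believe the command is along these lines (VD 0 on adapter 0 as an example):

c:\>MegaCLI -LDSetProp -EnDskCache -L0 -a0

Re-running the earlier -LDGetProp -DskCache check should then report it as Enabled.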

I'd suggest you do it; it's essential for the drives' wear leveling to function properly.

The battery caused massive problems, and so do the OEM controllers, which the 840 Pros do not tolerate (OEM-specific commands).
 

renderfarmer

Member
Feb 22, 2013
New Jersey
You need to enable the disk cache policy, dude. That requires using MegaSCU to alter the settings that block it, or inputting the test key. You know the free 30-day FastPath trial key can be re-applied endlessly without losing any volumes? Someone wrote a script that runs every 20 days to load the key with MegaSCU/MegaCLI; once the FastPath demo key is active, it lets you change the policy.

I'd suggest you do it; it's essential for the drives' wear leveling to function properly.

The battery caused massive problems, and so do the OEM controllers, which the 840 Pros do not tolerate (OEM-specific commands).
Thanks, bro. I was only made aware of this glaring incompatibility a month after I got my 9271 and 840 Pros...

All of that hacking and tweaking really doesn't sit well with me after I just spent $750 on the controller.

I think I'll sell the 9271 and get a 16-port Adaptec HBA like Andreas is using.
 

mrkrad

Well-Known Member
Oct 13, 2012
New reviews coming soon:

HPVSA - software Smart Array for Red Hat / Windows / ESXi 5.

Turn that frown (B120i, aka Intel C600 6-port SATA) upside down into a real Smart Array, with 512MB FBWC and RAID 5!

Turn that frown (B320i, aka LSI 2308 8-port SAS on a riser slot) upside down into a real Smart Array, with 512MB FBWC and RAID 5!

Think about this for a second: that's a REAL ESXi driver that turns the 6-port Intel C600 SATA RAID into a Smart Array (ACU, etc.).
That's an MPT_SAS or HPVSA-mode riser card in that DL380e/DL360e. Solaris users are forced to use ZFS and IT mode with MPT_SAS for their drives. DARN!

And lastly, the PMC Adaptec SR8v6gbps based P420/1GB FBWC, the steal of the century, which has CacheCade 1.0 (HP's read-only caching version) with a twist: 750GB of SSD cache with the 1GB FBWC, 1.5TB with the 2GB FBWC (it's possible to disable write caching to the hard drives, not recommended, and use another 500GB of SSD, but that's kinda stupid unless you are doing... SSD caching of SSDs!?).

Those slow 960GB Crucial M500s got you down? Turn that frown upside down with 750GB to 2TB of read caching on some Samsung 840 PROs (unused portions are usable for boot!). Free trial key! If you know how HP does keys, you'll really appreciate the key type: PER SERVER, like iLO2/iLO3. Of course you would always do the honorable thing and buy a key per server, as it would not be legal to type that SAAP 2.0 key into more than one server. Unlike LSI with their stupid SAFEID and web-transfer system, HP appreciates your moral rectitude and assumes you would prefer to true-up rather than face draconian DRM that could lock you out of your RAID (CacheCade, dead controller).

Then again, I think everyone knows they can input the LSI trial key every day and extend the free trial eternally. But thanks to great morals, nobody would ever create a scheduled task to do that. Nope. That would be bad! Do not try it at home.

So, HPVSA: can we make it work with regular Intel C600 SATA ports? Inquiring minds using ESXi 5 would like to know. Can we use it with a regular LSI 2308? Why did it take so long to get a software RAID driver for the fastest SSD RAID possible (C600 ports direct to the CPU)?

Ignore Software Defined Networking; if you are hip, you know that Software Defined Storage is where it's at. We already have the dvSwitch; I think we all want a dvSAN/dvNAS/dvDAS.
 

renderfarmer

Member
Feb 22, 2013
New Jersey
I decided to compare two sets of drives in RAID-0 on my LSI 9271-8i-CC:

4x Samsung 840 Pro 256GB SSDs (rated 540/520 MB/s, 100k/90k IOPS)
4x Intel 520 180GB SSDs (rated 550/520 MB/s, 50k/80k IOPS)

Win 2008 R2, both clean; No Read Ahead, Write Through, Direct IO, stripe = 64KB, on the LSI 9271-8i-CC:


 

Andreas

Member
Aug 21, 2012
1526/1395 MB/s sequential seems low for a 4x Samsung setup.
What is the IOMeter result with a 4MB block size? It might be an Anvil issue.
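As a rough sanity check against the rated per-drive numbers above: 4 x 540 MB/s is about 2160 MB/s of sequential read and 4 x 520 MB/s is about 2080 MB/s of write in RAID 0, so even with some controller overhead, 1526/1395 MB/s leaves a fair amount on the table.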
 

Andreas

Member
Aug 21, 2012
Just installed the second Adaptec 71605E controller and split the SSDs 12 per controller to better balance the PCIe 3.0 interconnect against the aggregated throughput of 12 SATA channels.

Setup: each card hosts a 12-SSD RAID 0, and the OS combines the two into a single volume. IOMeter runs against that one OS volume.

read: (491 MB/sec per SSD)


write: (460 MB/sec per SSD)
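
Across all 24 SSDs that works out to roughly 24 x 491 = ~11.8 GB/sec aggregate read and 24 x 460 = ~11.0 GB/sec aggregate write over the two cards.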


rgds,
Andy
 

renderfarmer

Member
Feb 22, 2013
New Jersey
1526/1395 MB/s sequential seems low for a 4x Samsung setup.
What is the IOMeter result with a 4MB block size? It might be an Anvil issue.
Thanks, Andreas. I'll give it a try this weekend and report back. What's interesting to me is that the Samsungs still did much better than the SandForce-based Intels, despite them being direct competitors and this being on an LSI 9271-8i.

P.S. Your 16-drive setup is amazing; nice DIY rack. What are you going to use it for, BTW?
p.s. your 16 drive setup is amazing. Nice DIY rack for them. What are you going to use them for btw?