crappy SSD raid performance - suggestions?


grfxlab

Member
Apr 6, 2016
Hey everyone, I am looking for suggestions on a problem I am having. I have a single Adaptec 72405 24-port RAID card directly connected to 14 Samsung SV843 960GB SSDs.
Looking at Adaptec's literature (google series7_performance_wp.pdf), the card can deliver almost 5,800 MB/s reads and 1,800 MB/s writes for 1MB sequential transfers using 24 SATA SSDs.
Since I have a bit more than half that number of drives, I hoped to get at least half the performance, but...
my current local speeds for 1MB sequential transfers are 1,640 MB/s read / 136 MB/s write. Over the 10GbE network I get at most 718 MB/s read / 98 MB/s write. I want to get significantly more.

I know RAID 10 should be faster, but I want as much storage space as possible. The read speeds are good enough, but the writes are terrible. Any suggestions for getting writes above 600 MB/s?
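For reference, here's a rough Python sketch of the kind of 1MB sequential check that can be run outside ATTO to sanity-check those numbers. The path and total size are placeholders, and OS caching will inflate the read result unless the file is much larger than RAM:

```python
# Rough 1 MiB sequential write/read check (hypothetical path and size).
# Not a replacement for ATTO/fio; OS caching will skew the read unless the
# file is much larger than RAM or the cache is dropped first.
import os
import time

PATH = r"E:\bench\seq_test.bin"   # placeholder: a file on the RAID volume
BLOCK = 1024 * 1024               # 1 MiB per IO, like the 1MB ATTO transfer size
COUNT = 8192                      # 8 GiB total

def seq_write() -> float:
    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(PATH, "wb", buffering=0) as f:
        for _ in range(COUNT):
            f.write(buf)
        os.fsync(f.fileno())      # push data to the array, not just the page cache
    return COUNT * BLOCK / (time.perf_counter() - start) / 1e6  # MB/s

def seq_read() -> float:
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    return COUNT * BLOCK / (time.perf_counter() - start) / 1e6  # MB/s

if __name__ == "__main__":
    print(f"sequential write: {seq_write():.0f} MB/s")
    print(f"sequential read:  {seq_read():.0f} MB/s")
```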
 

vanfawx

Active Member
Jan 4, 2015
Most hardware RAID cards have an option to disable the write cache of the card and pass reads directly through to the SSDs. For LSI MegaRAID, it's called "FastPath". If there's an equivalent for Adaptec, you should enable it and test again.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
I don't think the Adaptec cards have an equivalent, but the setting on the ones I've used sits under general controller settings IIRC - there's a per-drive write cache option in there where you can turn the drive write cache back on.
 

vanfawx

Active Member
Jan 4, 2015
I guess you can fake it by setting the controller cache to "Write-Through" and reads to "Direct", and enabling the write cache of the SSDs themselves. I know LSI recommends a 64KB stripe size, which might be worth trying on this controller along with the other changes.
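To put rough numbers on why stripe size matters on a parity array: a write smaller than the full data stripe forces a read-modify-write for parity. A quick back-of-the-envelope helper (generic RAID 5 arithmetic, nothing Adaptec-specific; the drive counts and stripe sizes are just examples):

```python
# Back-of-the-envelope full-stripe math for parity RAID (generic RAID 5
# arithmetic, nothing Adaptec-specific). A write smaller than the full data
# stripe forces a read-modify-write for parity, which is what hurts writes.
# Alignment matters too, so this is only a rough guide.

def full_stripe_kib(drives: int, stripe_kib: int, parity_drives: int = 1) -> int:
    """Size of one full data stripe in KiB for a RAID 5/6-style layout."""
    return (drives - parity_drives) * stripe_kib

for drives in (14, 18, 24):                 # example member counts from the thread
    for stripe_kib in (64, 256):            # LSI-recommended 64K vs the 256K maximum
        fs = full_stripe_kib(drives, stripe_kib)
        note = "1 MiB IO >= full stripe" if 1024 >= fs else "1 MiB IO < full stripe"
        print(f"{drives} drives, {stripe_kib}K stripe -> full data stripe = {fs}K ({note})")
```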
 

grfxlab

Member
Apr 6, 2016
Thank you for the suggestions. I set up the RAID using straight default settings. The transfer rate is 6Gb/s and the write cache is set to off (write-through). Oddly, the block size on each drive is reported as 512 bytes while the logical device says it can be 512 bytes or 4K. I will have to look that over to double-check the settings.
 

grfxlab

Member
Apr 6, 2016
OK. Over the weekend I added another drive to the RAID 5 set and changed the stripe size to 256K (the maximum available). Unfortunately I don't know what the stripe size was before, but I know it was small. The 512-byte block size is fixed for these SSDs. I tested speeds before and after with ATTO. After the expansion and stripe-size change, write performance for larger files increased significantly, but smaller files are still slower than an HDD RAID and way below a single SSD. I also tried a 3-drive RAID 0, which improved the low-end transfers. I will try a 6-drive RAID 0 after I get a couple more drives, and if that improves things significantly I will convert the RAID 5 to a RAID 10. ATTO results below for those interested. The first image is disk performance of the 17-drive RAID 5 (Samsung SV843 960GB SSDs) with the original small stripe.
17drive_raid5_4K_stripe.jpg

The second image is actually an 18-drive RAID 5 with a 256K stripe.
17drive_raid5_256Kstripe.jpg

The last image is a 3-drive RAID 0 with the same SV843 960GB drives.
3drive_raid0.jpg
 

i386

Well-Known Member
Mar 18, 2016
Did you check the "performance" settings?

I can't remember if it was a per-array or per-system/controller configuration, but you can choose between OLTP (every IO is transformed/handled like an 8K IO), dynamic (switches between big block bypass and OLTP/8K IO), and big block bypass.
My 6805 used OLTP by default, which resulted in low performance in a file server, especially with small IO.
 

grfxlab

Member
Apr 6, 2016
Unfortunately the results are identical with the performance setting on dynamic or big block bypass. I have not tried OLTP. Good suggestion though.
 

Rand__

Well-Known Member
Mar 6, 2014
Not sure it's relevant on a hardware RAID card, but what does the CPU load look like?
 

grfxlab

Member
Apr 6, 2016
CPU and RAM are at 6-8%.

I tested 3-, 4-, and 5-drive RAID 0. The results are similar for writes below 64K file size; only above that does it scale significantly. Either way, a RAID 0 (which would become RAID 10) has a 10x write gain for files under 2MB, while reads are only 20% faster (18-drive RAID 5 vs. 5-drive RAID 0).

Further testing revealed these drives are just not great at the lower end. See the single-drive comparison of the SV843 vs. the 850 EVO below (hooking the SV843 directly to motherboard SATA was slightly slower).
I am trying to figure out if the cost of going RAID 10 with these drives is worth the benefit. It's hard to believe that 5 drives in RAID 0 are 5 times slower than a single 850 EVO on files smaller than 2K, but those take only a fraction of a second to write anyway, and our workload is rendered images. Not much besides texture maps is that small, and those are mostly reads. So I might have to live with it. Thanks for the input, everyone.

single_SV843.jpg single_850EVO.jpg
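For what it's worth, the small-IO side can also be cross-checked outside ATTO with a crude loop of flushed 4K writes. This is just a rough sketch with a placeholder path, and because every write is fsync'd it's a harsher test than ATTO's queued IO, but it's handy for a relative comparison between the array and a single drive:

```python
# Crude small-IO write check (hypothetical path). Every write is fsync'd, so
# this measures flushed 4 KiB writes -- harsher than ATTO's queued IO, but
# useful for a relative comparison between the array and a single drive.
import os
import time

PATH = r"E:\bench\small_io.bin"   # placeholder: a file on the volume under test
BLOCK = 4 * 1024                  # 4 KiB per write
COUNT = 2000

buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(PATH, "wb", buffering=0) as f:
    for _ in range(COUNT):
        f.write(buf)
        os.fsync(f.fileno())      # force each write out before issuing the next one
elapsed = time.perf_counter() - start
print(f"{COUNT} x 4 KiB flushed writes: {COUNT / elapsed:.0f} IOPS, "
      f"{COUNT * BLOCK / elapsed / 1e6:.2f} MB/s")
```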
 

Rand__

Well-Known Member
Mar 6, 2014
Were those drives new?
Can you do a secure erase or a regular wipe on them?
 

grfxlab

Member
Apr 6, 2016
The drives are mixed. The early drives were new; as I expanded the RAID set I bought used ones in order to keep the same drive model in the set. ATA Secure Erase command vs. secure erase vs. long format = same results as posted.
 

grfxlab

Member
Apr 6, 2016
Looking at old reviews of these drives (even from Patrick), it looks like the single-drive performance I'm seeing is correct for them. The good news is the DWPD/TBW ratings are so big that these will last forever.