SSD RAID 6 - discrepancy between benchmarks and real-world tests - stripe size right?


blublub

Member
Dec 17, 2017
Hi

I am setting up a RAID for a file server. It will be a RAID 6 with 9 SSDs, replacing a RAID 10 with 8 HDDs (7,200 rpm).

The files we store are pretty small:
  • 10% below 32 KB
  • 5% between 32 and 64 KB
  • 60% between 64 and 300 KB
  • 25% above 300 KB
  • and about 2% above 1 MB
  • files are written once and won't change

The old server reads the files at 8-10 MB/s on average (real-world test set), and we want the new one to be quite a bit faster.

RAID 6 Setup:
  • Broadcom 9460
  • Read ahead: NO
  • Write through
  • Direct IO: yes
  • Disk cache: yes (SSDs have PLP)
  • SSDs: Micron 5200 ECO 3.84 TB
  • Windows Server 2019, RAID Volume with ReFS

Tests:
Real-world copy of the test set from the HDD RAID 10 to a RAM disk: 29 minutes

Real-world copy of the test set from the SSD RAID 6 to a RAM disk: 4:41 minutes - average approx. 55 MB/s

So the new setup is already a LOT faster than the old one, that's for sure, but it is quite a lot slower than the benchmarks with 4 KB to 512 KB reads would imply:

[Benchmark screenshots attached: QD1, IO QD1, QD4, IO QD4, and CrystalDiskMark]

All tests above were done with a 256 KB stripe size; tests with a 64 KB stripe size are identical within the margin of error. FastPath should be enabled with my VD settings.

So from the benchmarks I would actually expect at least about 100-200 MB/s real-world read performance for my file-mix/test-set scenario.
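A rough sanity check of that expectation: a copy of a mixed test set is bytes-weighted, so the blended throughput is total bytes divided by total time per class. The file-count fractions below come from the mix stated above; the representative file sizes and per-class QD1 throughputs are assumptions for illustration, not measured values.

```python
# Back-of-envelope blended throughput for the stated file mix.
# (fraction of files, representative size in KB, ASSUMED QD1 MB/s per class)
classes = [
    (0.10,  16,  30),   # below 32 KB
    (0.05,  48,  60),   # 32-64 KB
    (0.60, 150, 120),   # 64-300 KB
    (0.25, 500, 250),   # above 300 KB
]

# Bytes and time contributed by an "average" file, per class.
total_kb = sum(frac * size for frac, size, _ in classes)
total_s  = sum(frac * size / (mbps * 1024) for frac, size, mbps in classes)

blended = total_kb / 1024 / total_s   # MB/s
print(f"blended throughput ~ {blended:.0f} MB/s")
```

With these placeholder per-class speeds the estimate lands around 160 MB/s, i.e. inside the 100-200 MB/s range expected from the benchmarks, and well above the 55 MB/s observed.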

From my understanding of RAID, a smaller stripe size, let's say 32 or 16 KB, should offer higher transfer speeds: more files can be read from more than one disk (they can be striped across more disks), and overhead is reduced for small files, since only 32 KB need to be read for a 4 KB file instead of 256 KB. Of course this increases the IO load, but since these are SSDs they should handle it.
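That reasoning can be made concrete. Following the model above, where the controller fetches whole strips, the read amplification for a small file is the strip-aligned bytes fetched divided by the bytes actually wanted (the helper below is a hypothetical illustration, not controller-specific behaviour):

```python
import math

def read_amplification(file_kb: float, strip_kb: int) -> float:
    """Bytes fetched (whole strips) divided by bytes the host wanted,
    under the assumption that the controller reads strip-sized chunks."""
    strips = math.ceil(file_kb / strip_kb)
    return (strips * strip_kb) / file_kb

# A 4 KB file under different strip sizes:
for strip in (256, 64, 16):
    print(f"{strip:>3} KB strip: {read_amplification(4, strip):.0f}x amplification")
```

Under this model a 4 KB file costs 64x its size at a 256 KB strip, 16x at 64 KB, and 4x at 16 KB, which is why a smaller strip looks attractive for this mix.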

The problem is that the Broadcom only offers 64 KB as its smallest stripe size; Areca and Adaptec go down to 16 KB and 4 KB respectively. I could get an Adaptec 3102-8i for testing pretty easily, an Areca not so easily.

So, to all the people who know more than I do:
Is there a chance to get more out of the SSD RAID - e.g. a lower stripe size or more SATA/SAS lanes (16i controller; I can hook up 12 channels to the expander) - or is this probably as good as it gets?

Thanks for any input :)
 

mrkrad

Well-Known Member
Oct 13, 2012
Use the BBWC with the SSDs; FastPath doesn't help RAID 6. I'd also try RAID 5, since the chances of drives failing are slim to none. Strip size should equal the SSD page size, 4K or 16K depending on the drive, to minimize write amplification!
 

blublub

Member
Dec 17, 2017
Use the BBWC with the SSDs; FastPath doesn't help RAID 6. I'd also try RAID 5, since the chances of drives failing are slim to none. Strip size should equal the SSD page size, 4K or 16K depending on the drive, to minimize write amplification!
Hi, thanks for your reply.

How do I find out the page size of my SSDs? If it's 4K, I can't set a matching strip size anyway, since the lowest the Broadcom offers is 64K and the Adaptec 16K.

Going from RAID 6 to RAID 5 is probably not going to bring much, as it will only speed up reads of files large enough to be striped over multiple disks - so roughly 200-300 KB and up.
According to the benchmarks, the small files are killing my performance. Additionally, I appreciate the extra redundancy.
 

blublub

Member
Dec 17, 2017
I'd suggest testing with RAID 50 - 3x 3-disk RAID 5. Lots of BBWC is still a must for RAID 5/6 on HW RAID controllers.
Well, going 3x RAID isn't a financially viable option.
There are some reviews showing that BBWC isn't helpful in SSD-based RAID; in my case, though, writes are a non-issue anyway - I want to improve random read speed for files below 200-300 KB in size. From 300 KB up, random read in real-world tests is well over 200 MB/s, which is fine.
 

blublub

Member
Dec 17, 2017
I'd suggest testing with RAID 50 - 3x 3-disk RAID 5. Lots of BBWC is still a must for RAID 5/6 on HW RAID controllers.
I tried RAID 50; it's the same performance in my real-world copy test. ATTO, however, shows a 50-100% improvement - it just doesn't translate into real-world performance.

Either the drives can't go any faster even when working on files striped at a 64 KB strip size, or it's some IO/bandwidth issue from only having 8 SAS channels. Sucks..
 

blublub

Member
Dec 17, 2017
strip-size = per drive
stripe-size = per array
Hi, I was talking about strip size.

I have meanwhile replaced the Broadcom with an Adaptec controller, and a strip size of 16 KB gave the best overall performance for the most common file sizes.
I also moved to IOMeter for benchmarking, as the synthetics weren't helping much and my Windows copy test was useless - it is really slow and doesn't scale at all.
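The scaling problem with a plain copy is queue depth: a single-threaded copy issues one read at a time. A minimal sketch of a parallel read benchmark (a crude stand-in for IOMeter, with paths and worker count as placeholders for your own test set) looks like this:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def read_file(path: str) -> int:
    """Read one file fully and return its size in bytes."""
    with open(path, "rb") as f:
        return len(f.read())

def bench(paths, workers=4):
    """Read all files with `workers` parallel threads; return MB/s.
    More workers roughly corresponds to a higher queue depth."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as ex:
        total_bytes = sum(ex.map(read_file, paths))
    elapsed = time.perf_counter() - start
    return total_bytes / (1024 * 1024) / elapsed
```

Comparing `bench(paths, workers=1)` against higher worker counts on the same test set shows how much of the gap between the copy test and the synthetic QD4 numbers is simply outstanding-IO depth. (Note that OS file caching will inflate repeat runs, so each pass should use fresh or uncached files.)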

I think that with the budget I have, it's a pretty good trade-off between speed, reliability and capacity.
I am currently benchmarking my workload in a Hyper-V environment, but the I/O results are disappointing, so this workload will be installed bare-metal (12-30% drop in the VM, whether as VHDX or passthrough disk).
 