Hi
I am setting up a RAID for a file server. It will be a RAID 6 with 9 SSDs and will replace a RAID 10 with 8 HDDs (7200 rpm).
The files we store are pretty small:
- 10% below 32 KB
- 5% between 32 and 64 KB
- 60% between 64 and 300 KB
- 25% above 300 KB
- about 2% above 1 MB
- files are written once and won't change
The old server reads the files at an average of 8-10 MB/s (real-world test set), and we want the new one to be quite a bit faster.
RAID 6 Setup:
- Broadcom 9460
- Read ahead: NO
- Write through
- Direct IO: yes
- Disk cache: yes (SSDs have PLP)
- SSDs: Micron 5200 ECO 3.84 TB
- Windows Server 2019, RAID volume formatted with ReFS
Tests:
Real-world copy of the test set from the HDD RAID 10 to a RAM disk: 29 minutes
Real-world copy of the test set from the SSD RAID 6 to a RAM disk: 4:41 minutes - average approx. 55 MB/s
So the new setup is already a LOT faster than the old one, that's for sure, but it is still quite a lot slower than the benchmarks with 4 KB to 512 KB reads would imply:
[Benchmark screenshots: QD1, IO QD1, QD4, IO QD4, and CrystalDiskMark results]
All tests above were done with a 256 KB stripe size; tests with a 64 KB stripe size are identical within the margin of error. FastPath should be active with my VD settings.
So from the benchmarks I would actually expect at least about 100-200 MB/s real-world read performance with my file mix / test set.
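As a sanity check, here is a rough back-of-envelope calculation (a minimal Python sketch) of what the measured 55 MB/s means per file; the per-bucket "midpoints" for the file mix are my own guesses, not measured values:

```python
# Rough average file size from the mix above.
# The per-bucket sizes are guesses for illustration, not measurements.
mix = [
    (0.10, 16),    # below 32 KB    -> assume ~16 KB
    (0.05, 48),    # 32-64 KB       -> assume ~48 KB
    (0.60, 180),   # 64-300 KB      -> assume ~180 KB
    (0.25, 600),   # above 300 KB   -> assume ~600 KB (incl. the ~2% above 1 MB)
]
avg_kb = sum(share * size_kb for share, size_kb in mix)   # ~260 KB

copy_mb_s = 55                              # measured real-world copy speed
files_per_s = copy_mb_s * 1024 / avg_kb     # ~215 files/s
ms_per_file = 1000 / files_per_s            # ~4.6 ms per file
print(f"avg ~{avg_kb:.0f} KB/file, ~{files_per_s:.0f} files/s, ~{ms_per_file:.1f} ms/file")
```

If those guesses are anywhere near reality, each file effectively takes 4-5 ms end to end, which seems like far more time than a single ~250 KB read should need on these SSDs, so the copy looks latency-bound rather than bandwidth-bound.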
From my understanding of RAID, a smaller stripe size, let's say 32 or 16 KB, should offer a higher transfer speed: more files can be striped across (and read from) more than one disk, and the overhead for small files shrinks, since only 32 KB instead of 256 KB would need to be read for a 4 KB file. Of course this increases the IO load, but since these are SSDs they should be able to handle it.
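To put numbers on that with my file mix, here is a toy calculation (Python); it simply assumes whole stripe units are touched per file, which may not be what the controller actually does for reads:

```python
import math

# Toy model: how many data drives a file can span and how much data the
# involved stripe units cover. Only meant to illustrate the reasoning above;
# a real controller may read far less than a full stripe unit per drive.
DATA_DRIVES = 7   # 9 drives in RAID 6 -> 7 data drives per stripe

def span(file_kb, stripe_kb):
    units = math.ceil(file_kb / stripe_kb)    # stripe units the file occupies
    drives = min(units, DATA_DRIVES)          # drives that can serve it in parallel
    return drives, units * stripe_kb          # drives, KB covered by those units

for stripe_kb in (256, 64, 16):
    for file_kb in (4, 48, 180, 600):
        drives, covered_kb = span(file_kb, stripe_kb)
        print(f"stripe {stripe_kb:>3} KB, file {file_kb:>3} KB: "
              f"{drives} drive(s), {covered_kb:>4} KB covered")
```

With 256 KB stripe units a typical 180 KB file sits entirely on one drive, while with 16 KB units it would be spread over all seven data drives - that is the effect I am hoping a smaller stripe size would give me.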
The problem is that the Broadcom only goes down to a 64 KB stripe size, while Areca and Adaptec go down to 16 KB and 4 KB respectively. I could get an Adaptec 3102-8i for testing pretty easily; an Areca, not so easily.
So, to all the people who know more than I do:
Is there a chance to get more out of the SSD RAID, e.g. with a smaller stripe size or more SATA/SAS lanes (a 16i controller - I can hook up 12 channels to the expander), or is this probably as good as it gets?
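One thing I still want to try before swapping controllers is a copy that reads several files in parallel, to check whether the one-file-at-a-time copy (effectively QD1) is what holds things back rather than the array itself. A minimal sketch of what I have in mind, with placeholder paths and a guessed thread count:

```python
import shutil
import time
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Placeholder paths - adjust to the actual test set and RAM disk.
SRC = Path(r"E:\testset")
DST = Path(r"R:\testset")
WORKERS = 8   # a guess; worth sweeping 2-16 to see where scaling stops

def copy_one(src_file: Path) -> int:
    """Copy one file, keep the relative directory layout, return its size."""
    rel = src_file.relative_to(SRC)
    target = DST / rel
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(src_file, target)
    return src_file.stat().st_size

files = [p for p in SRC.rglob("*") if p.is_file()]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    total_bytes = sum(pool.map(copy_one, files))
elapsed = time.perf_counter() - start

print(f"{len(files)} files, {total_bytes / 1024**2:.0f} MB in {elapsed:.0f} s "
      f"-> {total_bytes / 1024**2 / elapsed:.0f} MB/s")
```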
Thanks for any input!