Setting up an all-SSD server using Micron 5210 ION 7.68TB drives behind an AOC-S3108L-H8iR, and I'm getting absolute trash for write performance in DiskSpd/CrystalDiskMark 'real world' performance tests.
For starters, in JBOD I'm getting perfectly acceptable results with the same test:
------------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes
[Read]
Sequential 1MiB (Q= 1, T= 1): 317.481 MB/s [ 302.8 IOPS] < 3301.36 us>
Random 4KiB (Q= 1, T= 1): 21.717 MB/s [ 5302.0 IOPS] < 188.23 us>
[Write]
Sequential 1MiB (Q= 1, T= 1): 338.690 MB/s [ 323.0 IOPS] < 3093.16 us>
Random 4KiB (Q= 1, T= 1): 61.783 MB/s [ 15083.7 IOPS] < 65.93 us>
[Mix] Read 70%/Write 30%
Sequential 1MiB (Q= 1, T= 1): 295.500 MB/s [ 281.8 IOPS] < 3541.71 us>
Random 4KiB (Q= 1, T= 1): 20.417 MB/s [ 4984.6 IOPS] < 200.04 us>
Profile: Real
Test: 64 GiB (x2) <0Fill> [Interval: 5 sec] <DefaultAffinity=DISABLED>
OS: Windows Server 2019 [10.0 Build 17763] (x64)
------------------------------------------------------------------------------
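For reference, the Q1/T1 parts of that 'Real' profile should be roughly reproducible with DiskSpd directly - a minimal sketch, with the target path and duration as placeholders rather than my exact values:
  diskspd.exe -c64G -b1M -o1 -t1 -w100 -d30 -Sh -L E:\diskspd_test.dat   (sequential 1MiB writes, Q1/T1)
  diskspd.exe -c64G -b4K -r -o1 -t1 -w100 -d30 -Sh -L E:\diskspd_test.dat   (random 4KiB writes, Q1/T1)
-Sh disables OS buffering and requests write-through to the device, so it's the pessimistic no-cache case.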
But when I try it in any configuration other than RAID 0 - Storage Spaces was tried too, but that's never done well with Parity - it is absolute trash.
An example: RAID 50 built from a pair of 12-disk RAID 5 spans, configured with no read ahead / write through:
------------------------------------------------------------------------------
[Read]
Sequential 1MiB (Q= 1, T= 1): 757.525 MB/s [ 722.4 IOPS] < 1383.17 us>
Random 4KiB (Q= 1, T= 1): 20.895 MB/s [ 5101.3 IOPS] < 195.63 us>
[Write]
Sequential 1MiB (Q= 1, T= 1): 109.678 MB/s [ 104.6 IOPS] < 9549.54 us>
Random 4KiB (Q= 1, T= 1): 11.595 MB/s [ 2830.8 IOPS] < 352.57 us>
[Mix] Read 70%/Write 30%
Sequential 1MiB (Q= 1, T= 1): 405.627 MB/s [ 386.8 IOPS] < 2582.32 us>
Random 4KiB (Q= 1, T= 1): 15.446 MB/s [ 3771.0 IOPS] < 264.59 us>
Profile: Real
Test: 64 GiB (x2) <0Fill> [Interval: 5 sec] <DefaultAffinity=DISABLED>
OS: Windows Server 2019 [10.0 Build 17763] (x64)
------------------------------------------------------------------------------
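For reference, that RAID 50 was laid out roughly along the lines of the StorCLI command below - a sketch, with the controller number, enclosure:slot range, and strip size as placeholders rather than my exact values:
  storcli64 /c0 add vd type=raid50 drives=252:0-23 pdperarray=12 wt nora direct strip=256
wt / nora / direct being the write through, no read ahead, direct I/O settings mentioned above.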
So the only thing I can think of is to use a RAID 0 with a hot spare and trust the SSD protection feature that Avago/LSI/MegaRAID cards have, which is supposed to swap a good SSD into the RAID 0 before utter failure.
I've tried RAID 5 with FastPath as well, with no better results. That really boggles my mind, as FastPath on a mirror works great. I understand parity calculations lower performance, but I can get better sequential writes out of a RAID 50 of NL-SAS 7200 rpm HDDs with more capacity in a 2U system. The random IOPS aren't as good, it's true, but there's no reason I can think of that the parity calculation on any modern RAID card should hurt performance that much.
I've used RAID 5 on SAS SSDs in Dell servers with basically the same cards - same write through / no read ahead config - and get much better performance despite using only 6 SSDs, where this is 24 - the additional write paths should make it scale.
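The way I read the numbers, at Q1/T1 throughput is just block size divided by per-write latency, so the sequential write results above fall straight out of the latency the controller is adding:
  1 MiB / 9.55 ms per write ≈ 105 IOPS ≈ 110 MB/s   (the RAID 50 result)
  1 MiB / 3.09 ms per write ≈ 323 IOPS ≈ 339 MB/s   (the JBOD result)
So in write-through parity mode the card is taking roughly three times as long to acknowledge each 1MiB write, and at queue depth 1 that latency caps throughput directly.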
Thoughts? Am I expecting too much performance from parity on a 64GiB test file?
Use case: Veeam repository (hence the large file size) - in real-world use, would it issue more than Q1/T1 for writes?
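If it's useful, I can rerun the same DiskSpd test with a deeper queue and more threads to see whether the array scales once it isn't latency-bound at Q1/T1 - something like (again, path and duration are placeholders):
  diskspd.exe -c64G -b1M -si -o8 -t4 -w100 -d30 -Sh -L E:\diskspd_test.dat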