Hi,
I am playing around with a bunch of SSDs trying to get a feel for how mdadm scales and where the sweet spot is. This is still part of my everlasting search for the ultimate shared storage setup for my VMware boxes...
So I have a fresh Ubuntu installation, all defaults, no optimizations yet. Just dropped the drives onto the onboard SATA controller of an X10SRA with an E5-2667 v4 ES (2.9 GHz core frequency).
The drives are Intel DC S3700 400GB (some Intel-branded, some Dell-branded, all fixed to 6 Gb/s).
I have run a bunch of 4K-centric tests with various drive counts in the array and have to say it scales extremely badly - basically it hits a limit at around 200K IOPS and that's it.
Scaling from 2 to 8 drives in RAID 0 (4 jobs, iodepth 16, runtime 60s). I know steady-state performance will drop to roughly 30K IOPS per drive, so 8 drives should still give more than 200K.
de52_s3700_2r0: write: io=16447MB, bw=280683KB/s, iops=70170, runt= 60002msec
de52_s3700_4r0: write: io=32251MB, bw=550390KB/s, iops=137597, runt= 60002msec
de52_s3700_6r0: write: io=42368MB, bw=723056KB/s, iops=180763, runt= 60002msec
de52_s3700_8r0: write: io=46302MB, bw=790214KB/s, iops=197553, runt= 60001msec
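For reference, this is roughly the kind of setup and fio job behind those numbers (device names, chunk size and job name here are just examples, not necessarily exactly what I used):

# build the 8-drive RAID 0 - example devices and chunk size
mdadm --create /dev/md0 --level=0 --raid-devices=8 --chunk=64 /dev/sd[b-i]

# 4K random writes, 4 jobs, iodepth 16, 60s, straight to the md device
fio --name=4kwrite --filename=/dev/md0 --direct=1 --ioengine=libaio \
    --rw=randwrite --bs=4k --numjobs=4 --iodepth=16 \
    --runtime=60 --time_based --group_reporting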
I also tried more jobs (8) and different iodepth values - no real improvement.
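(In case anyone wants to repeat that sweep, something along these lines is what I mean - just a sketch, and it writes directly to /dev/md0:)

for jobs in 4 8; do
  for depth in 16 32 64; do
    fio --name=sweep_${jobs}j_${depth}q --filename=/dev/md0 --direct=1 \
        --ioengine=libaio --rw=randwrite --bs=4k --numjobs=$jobs \
        --iodepth=$depth --runtime=60 --time_based --group_reporting
  done
done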
Just for fun I ran a 2-disk RAID 0 on Intel 750s - quite inconsistent results (but that might be steady state kicking in, since the runs were back to back)...
de51_nvmer0: write: io=96582MB, bw=1609.7MB/s, iops=412060, runt= 60003msec
de51_nvmer0: write: io=53776MB, bw=917734KB/s, iops=229433, runt= 60003msec
So mdadm *is* able to reach more than 200K IOPS, so maybe this is more of a SATA controller issue?
Will need to repeat the tests on an HBA next...
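In the meantime, a quick way to sanity-check the controller theory is to watch how the AHCI interrupts and CPU load spread out during a run (just the checks I would try, nothing definitive):

# see whether all AHCI interrupts land on a single core
grep ahci /proc/interrupts

# per-core load and per-device utilization while fio is running (sysstat package)
mpstat -P ALL 1
iostat -x 1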
What is your experience with SSDs on mdadm? Have you ever hit that limit? Did you fix it?