SSD RAID Level Performance Testing


ColPanic

I've finally collected enough enterprise-grade SSDs to do some meaningful performance testing across different parity levels and platforms, particularly with ZFS but also with hardware controllers. For virtualization storage, most people recommend RAID 10 (or striped mirrors) for the best overall performance, but other benchmarks seem to indicate that may not always be the case.

Here is what I intend to test. Let me know if there are other benchmarks that may be worthwhile.
I also intend to test different storage protocols (e.g. iSCSI, NFS, and SMB) at each parity level.

Test 0 - Baseline of each drive type on its own

Test 1 - (4) and (5) SSDs
Software RAID (ZFS) - striped, mirrored pairs, RAIDZ1 (rough zpool commands sketched below)
Hardware RAID - RAID 0, RAID 10, RAID 5

Test 2 - (6) and (8) SSDs
Same as above plus RAIDZ2

Test 3 - (12-13) SSDs
Same as above with mixed drive sizes
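For reference, here is a rough sketch of the ZFS layouts above as zpool commands. Pool and device names are placeholders, and the hardware RAID equivalents would be built on the 9260 controller instead.

Code:
# 4-way stripe (RAID 0 equivalent)
zpool create tank da0 da1 da2 da3

# striped mirrors (RAID 10 equivalent)
zpool create tank mirror da0 da1 mirror da2 da3

# single-parity RAIDZ1 (RAID 5 equivalent)
zpool create tank raidz1 da0 da1 da2 da3

# double-parity RAIDZ2, six drives (RAID 6 equivalent)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5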

Here are the drives and HBAs I'll be using:
(4) Samsung 853t 960GB
(4) HGST 400GB SAS
(2) Dell (SanDisk) Enterprise 960 GB
(3) Toshiba Enterprise HK3R2 960GB

HBA -
(3) LSI-2008-8I IT Mode
(1) LSI-3008-8I IT Mode
(1) LSI 9260-8I Raid
I will not be using any expanders, and the CP2600 with dual E5-2670s should provide plenty of PCIe bandwidth and horsepower.

Here is some of the baseline single-drive testing (benchmark screenshots for each drive):
Samsung 853t 960GB (review)
Dell (SanDisk) 960GB
Toshiba HK3R2 960GB (review)
HGST SAS 400GB (review)
 

T_Minus

- Can you please also note if you use compression as it changes the #s too. (Sometimes for better)
- Are all drives secure erased / full formatted prior to testing? If not then def. do this please :)
- Do you plan to just copy a few GB over or run them for a couple hours on each test to put into steady state?
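For what it's worth, a minimal sketch of the kind of secure erase being asked about, assuming a Linux box with hdparm and sg3_utils; device names are placeholders.

Code:
# SATA SSDs: set a temporary security password, then issue an ATA secure erase
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX

# SAS SSDs: low-level format via sg3_utils
sg_format --format /dev/sgX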
 

ColPanic

I do have lz4 compression on, as it seems to be a very low-cost default on ZFS. All of the drives are secure erased, and I usually give each configuration 12-24 hours to format (eager zero in VMware). I doubt I'll be able to do hours of testing for every configuration, but I can certainly do the primary 2 or 3. Is there a particular multi-hour test that you recommend for VM workload testing?
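For reference, a minimal sketch of the compression settings in play; the pool/dataset name is a placeholder.

Code:
# enable lz4 on the dataset/zvol used for testing
zfs set compression=lz4 tank/vmstore

# confirm the setting and see how much the test data actually compresses
zfs get compression,compressratio tank/vmstore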
 

T_Minus

I think @Patrick posted a configuration file for IOMeter that may be worth checking into, something with random reads and random writes mixed throughout. Since you're testing for VM performance, I assume you want to know how the drives will perform in the real world, i.e. a system with numerous VMs using the pool at once. Even if each VM is relatively low usage, they're all hitting the pool to some degree, so the drives will eventually reach steady state, and I think that's where it's good to see what performs how. IOMeter should let you record and simulate simultaneous access to get closer to what to expect; that's my understanding of it, at least.
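If it helps, here is a rough command-line equivalent of that kind of mixed random profile using fio on Linux. This is not @Patrick's IOMeter config, just an assumed 70/30 random read/write mix; the target device and runtime are placeholders, and a long run like this also pushes the SSDs toward steady state.

Code:
# 70/30 random read/write at 4K, several jobs in parallel, long enough
# (2 hours here) to approach steady state before recording numbers
fio --name=vm-mix --filename=/dev/sdX --direct=1 --ioengine=libaio \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --numjobs=4 \
    --time_based --runtime=7200 --group_reporting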

Def. also need to know if sync is on or off, or if you plan to test both, and then with or without a separate SLOG device too?
 

ColPanic

Thanks, I'll look for that. VMware also has a tool that uses IOMeter to test workloads, I/O Analyzer, which I'll check out. I don't want to make this so complicated, or run so many iterations, that I never finish it.

I wouldn't normally use a SLOG with an all-SSD array, but I'm frequently wrong, so I can test one. I plan to leave sync set to standard, so effectively always on for NFS and off for iSCSI (except for metadata). These are all drives with power-loss protection.
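For reference, a minimal sketch of the sync and SLOG knobs involved; dataset and device names are placeholders.

Code:
# standard = honor sync requests from the client (NFS syncs, iSCSI mostly doesn't)
zfs set sync=standard tank/vmstore

# force or disable sync writes for comparison runs
zfs set sync=always tank/vmstore
zfs set sync=disabled tank/vmstore

# add a separate log (SLOG) device for the sync=always tests
zpool add tank log da8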
 

ColPanic

Here is the first round of testing. First of all, all of this stuff is a hobby for me, not a job. I do not claim to be an expert at any of this, and it's entirely possible that I'm doing everything wrong, so please take it for what it's worth.

Test setup is a new VM host that I'm putting together with all-flash storage in a quest to idle under 100W. The motherboard is a Supermicro X10SRH with a Xeon E5-2667 v4 and 64GB RAM. It has a fresh install of ESXi 6.0 U2, and one 853t is connected to the motherboard, where all of the VM boot drives live. All of the testing is done through the other HBAs as noted.

I've been researching ways to bench these different RAID configurations that are more useful than the typical CrystalDiskMark or ATTO runs, and I'm currently using IOMeter with a configuration intended to test VM workloads. You can read more about the IOMeter settings I'm using here. I've also been using the VMware I/O Analyzer, which you can read about here. It generates a *lot* of data, but in a less user-friendly way, so I haven't shown those results here. If anyone has experience with that tool (or IOMeter) and would like to suggest more useful or relevant tests, I'm all ears.

Test 1: JBOD
For the first test I connected one HGST 400GB SAS SSD directly to VMware via the LSI-3008 HBA in IT mode. A virtual disk was created, formatted with eager zeros, and presented to Windows, where I ran the tests. All of the tests are done with virtual disks in this way. (I will do SMB and NFS differently, but haven't gotten there yet.) IOMeter was also run within the same Windows VM.
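For reference, a minimal sketch of creating an eager-zeroed thick virtual disk from the ESXi shell; the datastore path and size are placeholders, and the same thing can be done through the vSphere client.

Code:
# create a 100GB eager-zeroed thick VMDK on the test datastore
vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/test-datastore/iotest/iotest.vmdk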

Here is how the HGST 400 performed in JBOD. Very similar to how the drive performed when connected directly to Windows (test shown in the first post).


Here is the same drive with the IOMeter test


And just for comparison, here is one of the results from I/O Analyzer, using their "workstation" preset.


Test 2: ZFS Striped Mirrors (RAID 10 Equivalent)
For the second test, I passed the HBA through to FreeNAS 9.10, created a 2x2 striped mirror (comparable to RAID 10), and connected it to ESXi via iSCSI over vmxnet3 10Gb virtual networking (no physical NICs are involved). FreeNAS has 16GB of RAM and 2 vCPUs, and there are no other volumes for it to manage. LZ4 compression is enabled; dedupe and encryption are off. I did notice that synthetic reads and writes were much faster when the test data was not random. I assume this is due to compression, so all synthetic benchmarks use random data.
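As a quick illustration of why non-random data inflates the synthetic numbers (pool name and paths are placeholders), writing zeros versus random data and checking the compression ratio makes the effect visible:

Code:
# zeros are nearly free to store with lz4, so results look inflated
dd if=/dev/zero of=/mnt/tank/zeros.bin bs=1M count=4096

# random data is effectively incompressible and reflects the real write cost
dd if=/dev/urandom of=/mnt/tank/random.bin bs=1M count=4096

# compressratio shows how much lz4 is actually collapsing the test data
zfs get compressratio tank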

Here is how 4 drives performed in a striped mirror:


And with IOMeter


Test 3: ZFS Stripe (RAID 0 Equivalent)
For the third test I created a 4-way stripe (RAID 0) using the same four drives.


And with IOMeter


I expected write performance to improve more, but otherwise no real surprises so far. Next up I'll start the RAIDZ1 and RAIDZ2 tests using more drives.