Performance of this RAID10 make sense?


JayG30

Active Member
Feb 23, 2015
Hi guys. I'm looking for some help determining if the storage performance I'm seeing makes sense.

Some system specs:
barebones server: Intel R2312GL4GS
controller: Intel RMS25JB080 (IR firmware 20)
disks: 4 x SAMSUNG PM853T 960GB
ram: 16 x 8GB (128GB)
cpu: 2 x E5-2670

So, I boot into the Intel controller configuration utility (Ctrl+C, I believe), select the option to build a RAID10, and select the 4 disks. I waited until initialization finished, since I know that can impact testing.

I have tested performance of the 4-disk RAID10 from a bare-metal Win2012R2 install and from ESXi 6 VMs (Linux and Windows). Performance is basically identical in all cases. I also tested each disk individually on bare metal. Here are the AS SSD benchmark results.

Benchmarks of each disk individually (bare-metal)
Benchmarks of RAID10 as ESXi datastore (Windows Server VM)
Benchmarks of RAID10 on bare-metal Windows Server

I also ran iometer tests, which again showed roughly the same results.

I feel like something is off, but then again I could just be misunderstanding. Isn't RAID10 supposed to deliver roughly 2x write and 4x read performance? Judging by the individual-disk results, it doesn't seem to here. These were quick tests, so perhaps I'm missing a step.

Initially I wanted to test this because I've read multiple people state that with ESXi you need a controller with "cache and backup battery" or you can experience really poor performance. While I could see that being the case with HDDs, I wanted to test it with SSDs because I suspected it wouldn't matter (and so far it doesn't seem to).
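As a sanity check on the "2x writes, 4x reads" expectation, here is a quick sketch of the naive RAID10 (striped mirrors) throughput ceiling, assuming perfect scaling and no controller overhead. The single-disk numbers in the example are made up, not from the benchmarks above:

```python
# Naive RAID10 (striped mirrors) throughput ceiling, assuming perfect
# scaling and zero controller overhead -- a best-case upper bound, not
# what real hardware usually delivers.

def raid10_ceiling(single_read_mb, single_write_mb, n_disks=4):
    """Best-case sequential throughput (read, write) for an n-disk RAID10."""
    pairs = n_disks // 2
    # Reads can in theory be serviced by every disk (both halves of each mirror).
    read_mb = single_read_mb * n_disks
    # Each write lands on both disks of a pair, so writes scale with pair count.
    write_mb = single_write_mb * pairs
    return read_mb, write_mb

# Hypothetical single-disk numbers of 500 MB/s read, 450 MB/s write:
print(raid10_ceiling(500, 450))  # (2000, 900)
```

This is only the ceiling; stripe size, queue depth, and controller behavior all pull real results below it.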
 

aero

Active Member
Apr 27, 2016
Out of curiosity, can you try a 4 disk raid-0 to see if the speeds scale any better?
 

JayG30

Active Member
Feb 23, 2015
RAID0 using the controller, right? I have two of these servers with the exact same setup and performance, so I can do a RAID0; just give me a bit to set it up.
 

aero

Active Member
Apr 27, 2016
Yes, same controller. Unless you have an additional controller you can add and test as well.
 

aero

Active Member
Apr 27, 2016
If it makes you feel better my 4 disk SSD raid10 also isn't scaling properly for reads. They're rated for 340MB/s read, 100MB/s write.

I did read somewhere not to expect 4x performance on 4-disk raid10 though, more like 2.5-3x, which this is certainly achieving. I can't find the article at the moment though.

80GB Intel S3500s
using mdadm software RAID in Ubuntu 14.04

maxing out at 987.525MB/s sequential reads
203.716MB/s sequential writes (which is perfect: right at 2x a single drive)

edit: I suppose I should break up the array and test a single drive...<sigh>
 

aero

Active Member
Apr 27, 2016
Single drive tested great at 340 seq read / 100 seq write.

I reassembled the array, but when I tested this time I increased the iozone thread count to get larger queue depth and was able to get 1329.43MB/s, so very close to 4x performance of single.

Maybe try a different testing utility with a configurable queue depth?
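The queue-depth effect can be sketched with a toy model: with QD outstanding requests, at most min(QD, number of disks serving reads) disks are busy at once, so a low-QD benchmark can never show the array's full read scaling. This is a simplification (it ignores striping granularity and controller scheduling), using the S3500 numbers quoted above:

```python
# Toy model of why queue depth matters for RAID read scaling: with QD
# outstanding requests, at most min(QD, n_disks) disks can be kept busy
# simultaneously, so a QD=1 test reads at roughly single-disk speed.

def expected_read_mb(single_disk_mb, n_disks, queue_depth):
    busy_disks = min(queue_depth, n_disks)
    return single_disk_mb * busy_disks

for qd in (1, 2, 4, 8):
    print(qd, expected_read_mb(340, 4, qd))
# 1  340
# 2  680
# 4 1360
# 8 1360  (saturated -- every disk already busy)
```

The model's 1360 MB/s saturation point lines up with the ~1329 MB/s measured once the iozone thread count was raised.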
 

JayG30

Active Member
Feb 23, 2015
Maybe try a different testing utility with a configurable queue depth?
When you say thread count, do you mean outstanding IO? Edit: just realized you said iozone.

Ran iometer before, alongside the AS SSD tests, against the raw disks. Tried different numbers of workers (up to 32), maximum disk size, outstanding IOs (up to 64), access sequences, and so on, for both the individual disks and the RAID10. The numbers were essentially the same as AS SSD, so I didn't bother saving those results.
 

JayG30

Active Member
Feb 23, 2015
Out of curiosity, can you try a 4 disk raid-0 to see if the speeds scale any better?
So I finally got around to testing this.
I'm seeing slightly better writes (particularly sequential), but it doesn't seem to be scaling like it should. I figured I'd be seeing higher 4K-64Thrd numbers, for instance.

Benchmark of RAID0 on bare-metal Windows Server

This isn't adding up to me. I feel like I must be missing something obvious here...
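For comparison, the naive RAID0 ceiling (same caveats as any best-case model, with hypothetical single-disk numbers) scales both reads and writes with disk count, since there is no mirroring:

```python
# Naive RAID0 ceiling: with no mirror copies, both reads and writes
# stripe across every disk, so both should approach n_disks x one drive.

def raid0_ceiling(single_read_mb, single_write_mb, n_disks=4):
    return single_read_mb * n_disks, single_write_mb * n_disks

# Hypothetical 500 MB/s read, 450 MB/s write per disk:
print(raid0_ceiling(500, 450))  # (2000, 1800)
```

If a 4-disk RAID0 lands well under this at high queue depth, the bottleneck is likely the controller or test setup rather than the RAID level.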
 