Hi guys. I'm looking for some help determining if the storage performance I'm seeing makes sense.
Some system specs:
barebones server: Intel R2312GL4GS
controller: Intel RMS25JB080 (IR firmware 20)
disks: 4 x SAMSUNG PM853T 960GB
ram: 16 x 8GB (128GB)
cpu: 2 x E5-2670
So, I booted into the Intel controller configuration utility (Ctrl+C, I believe), selected the option to build a RAID10, and picked the 4 disks. I waited until it was done initializing, since I know that can affect testing.
I have tested the performance of the 4-disk RAID10 from a bare-metal Win2012R2 install and from ESXi 6 VMs (Linux and Windows). Performance is basically identical in all cases. I also tested each disk individually on bare metal. Here are the AS-SSD benchmark results.
Benchmarks of each disk individually (bare-metal)
Benchmarks of RAID10 as ESXi datastore (Windows Server VM)
Benchmarks of RAID10 on bare-metal Windows Server
I also ran iometer tests, which again showed roughly the same results.
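In case it helps, here's a rough Python sketch of the kind of sequential-write sanity check I could run alongside those tools. The test-file path is just a placeholder for wherever the RAID10 volume is mounted, and since it doesn't use direct I/O the OS cache can inflate the number, so treat it as a rough gauge only:

```python
import os
import time

# Rough sequential-write sanity check (illustrative only; AS-SSD and
# iometer remain the real benchmarks). Writes 1 GiB in 1 MiB chunks
# and reports MB/s.
TEST_FILE = "D:/bench.tmp"   # hypothetical path on the RAID10 volume
CHUNK = 1024 * 1024          # 1 MiB per write
TOTAL = 1024 * CHUNK         # 1 GiB total

buf = os.urandom(CHUNK)
# O_BINARY only exists on Windows; getattr keeps this cross-platform.
flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC | getattr(os, "O_BINARY", 0)
fd = os.open(TEST_FILE, flags)
start = time.perf_counter()
written = 0
while written < TOTAL:
    written += os.write(fd, buf)
os.fsync(fd)                 # flush so the data is actually committed
elapsed = time.perf_counter() - start
os.close(fd)
os.remove(TEST_FILE)
print(f"sequential write: {TOTAL / elapsed / 1e6:.0f} MB/s")
```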
I feel like something is off, but then again I could just be misunderstanding. Isn't RAID10 performance supposed to show roughly 2x writes and 4x reads over a single disk? Judging by the individual-disk results, it doesn't seem that way. These were quick tests, so maybe I'm missing a step.
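To make that expectation concrete, here's the back-of-envelope math I'm working from, as a quick Python sketch. The per-disk figures are placeholders, not my actual AS-SSD results:

```python
# Back-of-envelope check of the "2x write / 4x read" expectation for a
# 4-disk RAID10. The single-disk figures below are hypothetical; plug in
# the sequential results from the individual-disk AS-SSD runs.
single_read_mbs = 500    # placeholder per-disk sequential read, MB/s
single_write_mbs = 450   # placeholder per-disk sequential write, MB/s
disks = 4

# Reads can be served by every disk (both members of each mirror pair).
expected_read = single_read_mbs * disks           # ~4x a single disk
# Each write lands on both members of a mirror, so only the two
# stripe columns add throughput.
expected_write = single_write_mbs * (disks // 2)  # ~2x a single disk

print(f"expected sequential read : ~{expected_read} MB/s")
print(f"expected sequential write: ~{expected_write} MB/s")
```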
Initially I wanted to test this because I've read multiple people state that with ESXi you need a controller with "cache and backup battery" or you can experience really poor performance. While I could see that being the case with HDDs, I wanted to test it with SSDs because I suspected it wouldn't matter (and so far it doesn't seem to).