RAID10 Config Recommendations


masterchief07

New Member
Hey all! I currently have 8x Toshiba PX05SRB192 SAS SSDs installed in a ProLiant DL360 Gen10, connected to an HPE Smart Array P408i-a with 2GB cache. Current RAID array config:

RAID 10
Stripe size: 1024KB
Block size: 512B
Write cache: enabled
HPE SSD Smart Path: enabled

I'll be using this server as an ESXi host for a variety of VMs, so the workload is mixed. I've been playing around with different stripe sizes and caching preferences, but figured I'd throw it out here too for any recommendations on achieving optimal, balanced performance. I have yet to try disabling Smart Path, but will do that next and retest.
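For the retests I'm planning to flip those settings from the OS rather than rebooting into SSA. A rough sketch of what I'll run, assuming ssacli is installed and the controller is in slot 0 with logical drive 1 (check `ssacli ctrl all show config` for the real slot/LD numbers):

```python
# Sketch: toggle HPE SSD Smart Path and the cache ratio via ssacli.
# Assumes controller slot 0 and logical drive 1; verify with
# "ssacli ctrl all show config" before running.
import subprocess

def ssacli(*args):
    """Run one ssacli command and return its output."""
    result = subprocess.run(["ssacli", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

# Show the current logical drive settings, including the Smart Path state.
print(ssacli("ctrl", "slot=0", "ld", "1", "show", "detail"))

# Disable SSD Smart Path for the retest, then set a 70/30 read/write cache ratio.
ssacli("ctrl", "slot=0", "ld", "1", "modify", "ssdsmartpath=disable")
ssacli("ctrl", "slot=0", "modify", "cacheratio=70/30")
```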

Anyway, current performance metrics at the above config are below. Honestly, I feel like it's not performing nearly as well as it should.

[attached screenshots: benchmark results at the above config]
 

nabsltd

Well-Known Member
I'm not familiar with ProLiants and their backplanes, but if you have only one SAS cable (SFF-8643, Mini-SAS HD, and other names), that limits you to 48Gbps total (4 lanes * 12Gbps/lane). If that's the case, you are saturating the cable for reads (4784 MB/s is about 48Gbps, taking into account protocol overhead).

If you can plug in two cables, that should double your available bandwidth between the controller and the backplane.
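Quick back-of-the-envelope on the cable math (SAS-3 is 8b/10b encoded, so payload is 80% of the line rate):

```python
# One SFF-8643 (Mini-SAS HD) cable carries 4 SAS-3 lanes at 12Gbps each.
# SAS-3 uses 8b/10b encoding: 10 line bits per 8 payload bits.
lanes = 4
line_rate_gbps = 12.0
payload_gbps_per_lane = line_rate_gbps * 8 / 10      # 9.6 Gbps of payload per lane
cable_mb_per_s = lanes * payload_gbps_per_lane * 1000 / 8

print(f"one cable: ~{cable_mb_per_s:.0f} MB/s")      # ~4800 MB/s; 4784 MB/s measured => saturated
```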
 

masterchief07

New Member
Appreciate the insight, but yeah, I'm using 2 cables. :/

Looking into whether only one is operating or one is somehow disabled.
 

masterchief07

New Member
Looks like the drives may be operating in single-port mode... PHY 2 reports unknown. Exploring this further.

[attached screenshot: drive port status showing PHY 2 unknown]
 

mobilenvidia

Moderator
I take it not all 10 drives are in the RAID 10 array? Four are needed per set (2x2: pairs mirrored in RAID 1, then those RAID 1s striped in RAID 0).
I'm not sure how this RAID controller handles RAID 10 (whether the drives are mirrored first and then striped), but that could possibly limit the speed of data going on and off the array.
8x drives would easily saturate the ~7.8GB/s PCIe 3.0 bus if each runs at its rated 1.8GB/s read.

With those 2 spare drives, set one up as JBOD (if possible) or a single-drive RAID 0 and check performance, then maybe a 2-drive RAID 0; rough numbers are sketched below.
That will show the fastest the array could ramp up to as more drives are added.
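Something like this, assuming the P408i-a sits on a PCIe 3.0 x8 link with ~985MB/s of payload per lane (both figures are my assumptions):

```python
# Projected sequential read by drive count, capped by the controller's
# host link. Assumption: the P408i-a uses a PCIe 3.0 x8 link.
PCIE3_MB_PER_LANE = 985        # 8 GT/s with 128b/130b encoding, minus overhead
LINK_LANES = 8
DRIVE_READ_MB = 1800           # PX05SRB rated sequential read

bus_limit = PCIE3_MB_PER_LANE * LINK_LANES   # ~7880 MB/s

for n in (1, 2, 4, 8):
    raw = n * DRIVE_READ_MB
    print(f"{n} drive(s): {raw} MB/s raw -> ~{min(raw, bus_limit)} MB/s expected")
```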
 

masterchief07

New Member
Ahh, my mistake: I have 8x drives in the array, plus 2x cold spares offline. And yeah, that was my thinking. I should easily be able to saturate the bus.

Now here's an interesting note. I installed Windows directly onto the array and reformatted at a 256KB strip / 1024KB full stripe, with HPE Smart Path enabled. Going to try with controller caching next, and then with both disabled. I wonder if ESXi was not optimized properly; I was running HPE's 7u3 image, so maybe a driver issue?
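For anyone wondering about the 256/1024 figures: with 8 drives in RAID 10 only half hold unique data, so the full stripe is the per-drive strip times four. My arithmetic below, assuming SSA reports the values as strip / full stripe in that order:

```python
# Relation between per-drive strip size and full-stripe size in this
# 8-drive RAID 10 (assuming SSA's "256/1024" means strip / full stripe).
total_drives = 8
data_drives = total_drives // 2        # RAID 10: half the drives are mirrors
strip_kb = 256

full_stripe_kb = strip_kb * data_drives
print(f"{strip_kb}KB strip x {data_drives} data drives = {full_stripe_kb}KB full stripe")
```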


[attached screenshots: benchmark results, Smart Path enabled]

EDIT: HPE Smart Path and Controller Caching Disabled

[attached screenshots: benchmark results, Smart Path and caching disabled]

EDIT2: Controller caching enabled, 70/30 read/write cache ratio.

[attached screenshots: benchmark results, controller caching enabled at 70/30]
 

masterchief07

New Member
Reinstalled vSphere 7u3, installed Windows Server 2022 with VMware Tools 12.0.6, and ran the tests again. There is a performance hit running through the VM. This is with only controller caching enabled.

[attached screenshots: benchmark results from inside the VM]
 

i386

Well-Known Member
I don't know how the controller on the SSD handles the dual-port part: if it spreads the bandwidth equally across the ports, then the write speed would make sense (4x 900MByte/s ≈ 3600MByte/s).
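In numbers, as a sketch (the even 900MByte/s-per-port split is a guess, not a measured value):

```python
# If each dual-port SSD splits ~1800 MB/s evenly across its two ports and
# only one port is wired up, each drive can take ~900 MB/s of writes.
# In RAID 10 every host write lands on both drives of a mirror pair, so
# host-visible write bandwidth is pairs * per-drive speed.
mirror_pairs = 4               # 8 drives -> 4 RAID 1 pairs
per_drive_write_mb = 900       # assumed single-port write limit

print(mirror_pairs * per_drive_write_mb, "MB/s")   # 3600 MB/s, matching the benchmark
```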