Storage Spaces Direct poor write performance -- drives claim to have PLP


altmannj

New Member
May 7, 2019
6
0
1
I have a couple of drive models whose datasheets claim power loss protection (PLP); however, PowerShell reports "IsPowerProtected" as False after they join the S2D pool. These are enterprise-grade drives. I get sub-optimal performance with VMFleet, about 550MB/s of write throughput, and I suspect it's because Windows doesn't recognize these drives as having PLP.
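
For reference, this is roughly how the per-disk flags can be listed (a minimal sketch; the properties shown are the ones Get-StorageAdvancedProperty exposes):

# List cache and power-loss-protection flags for every physical disk
Get-PhysicalDisk |
    Get-StorageAdvancedProperty |
    Format-Table FriendlyName, SerialNumber, IsDeviceCacheEnabled, IsPowerProtected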

These are the models I'm using:

TOSHIBA PX05SMB080Y - cache
TOSHIBA PX04SVB192 - storage

Does anyone have any experience with these drives?

Thanks.
 

altmannj

New Member
May 7, 2019
6
0
1
I took one of the drives apart, and I can see six capacitors, which would seem to support the claim of PLP. I'm not sure why Windows detects these drives as not having it.
 

altmannj

New Member
May 7, 2019
6
0
1
While the drives were not yet in an S2D pool, I enabled the following options in devmgmt.msc:

"Enable write caching on the device"
"Turn off Windows write-cache buffer flushing on this device"

After setting these, the drives showed "IsPowerProtected = True" prior to S2D pool creation. After creating the S2D pool and running "Get-PhysicalDisk | Get-StorageAdvancedProperty", they all report False and return error code 1 for the caching property.

The S2D pool, however, shows "IsPowerProtected = True".
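
For what it's worth, the pool-level flag and the per-disk flags can be compared side by side with something like this (a sketch; it assumes a single non-primordial pool):

# Pool-level flag
Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, IsPowerProtected

# Per-disk flags for the disks in that pool
Get-StoragePool -IsPrimordial $false |
    Get-PhysicalDisk |
    Get-StorageAdvancedProperty |
    Format-Table FriendlyName, IsPowerProtected, IsDeviceCacheEnabled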

This is a two-node setup with dual 10GbE adapters running RDMA. RDMA runs at over 1.2GB/s, and I have RSS configured on the network adapters. Two of these caching drives sit in front of the capacity storage, and both show as "Journal". The cache drives are write-intensive models, with read-intensive capacity drives behind them.
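
In case it helps, this is roughly how the fabric and cache binding can be sanity-checked from PowerShell (a rough sketch, not exhaustive):

# Confirm RDMA is enabled on the adapters and visible to SMB
Get-NetAdapterRdma | Where-Object Enabled | Format-Table Name, Enabled
Get-SmbClientNetworkInterface | Format-Table InterfaceIndex, FriendlyName, RdmaCapable, RssCapable

# Confirm the cache drives are bound as Journal devices
Get-PhysicalDisk | Where-Object Usage -eq 'Journal' | Format-Table FriendlyName, MediaType, Usage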

I was off on my initial throughput statement. I'm receiving the following with diskspd:

diskspd.exe -t10 -o32 -b4k -si -Sh -w100 -d60 -D -L -c5g C:\ClusterStorage\WSR1-HV-01\test.io (roughly 350MB/s write throughput with caching turned off)

VMFleet output:

Start-FleetSweep -b 4 -t 8 -o 8 -w 100 -d 60 -p si (testing sequential writes, I see it max out at about 450MB/s on a node).

File copies, although not the most accurate test, from an NVMe drive to the S2D cluster land between 300MB/s and 400MB/s. Individual disk performance is far higher when the drives are not in an S2D cluster, or when they are used with StarWind VSAN.

I'm not sure whether these numbers are par for the course on a two-node S2D setup, or whether it still comes down to Windows reporting incorrect drive capabilities.
 

oneplane

Well-Known Member
Jul 23, 2021
844
484
63
Have you tried measuring the disk/CPU combination using other software like BTRFS, ZFS, LVM+ext4, or anything else, to figure out whether this is purely a case of Windows not using the on-disk caches?
 

altmannj

New Member
May 7, 2019
6
0
1
I performed some reference tests by removing the 800GB caching drive and only writing to a non-S2D storage pool containing the 6 capacity drives.

Drive/Storage Spaces configuration

Model: TOSHIBA PX04SVB192 (850MB/s rated sequential write)
Size: 1.92TB
Qty: 6

Storage Pool Configuration:

LogicalSectorSize = 512
NumberOfColumns = 3
NumberOfDataCopies = 2
PhysicalSectorSize = 4096
ResiliencySettingName = Mirror
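
For context, this layout can be reproduced with something along these lines (just a sketch; the pool and virtual disk names are placeholders I made up):

# Build a mirrored virtual disk across the six capacity drives
$disks = Get-PhysicalDisk -CanPool $true | Where-Object Model -like '*PX04SVB*'
New-StoragePool -FriendlyName 'CapacityPool' -StorageSubSystemFriendlyName 'Windows Storage*' -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName 'CapacityPool' -FriendlyName 'MirrorVD' -ResiliencySettingName Mirror -NumberOfColumns 3 -NumberOfDataCopies 2 -UseMaximumSize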

Reference File Copy (from NVMe to storage spaces virtual disk): 1.4GB/s

===Test 1===
Description: Test sequential interlocked write throughput without software/hardware caching

diskspd.exe -t10 -o32 -b4096 -Sh -si -w100 -d60 -D -L -c5g D:\test.io

Output: 588MB/s

===Test 2===
Description: Test sequential interlocked write throughput with software/hardware caching

diskspd.exe -t10 -o32 -b4096 -si -w100 -d60 -D -L -c5g D:\test.io

Output: 967MB/s

===Test 3===
Description: Test sequential (non-interlocked) write throughput without software/hardware caching

diskspd.exe -t10 -o32 -b4096 -Sh -s -w100 -d60 -D -L -c5g D:\test.io

Output: 967MB/s


===Test 4===
Description: Test sequential (non-interlocked) write throughput with software/hardware caching

diskspd.exe -t10 -o32 -b4096 -s -w100 -d60 -D -L -c5g D:\test.io

Output: 1274.23MB/s

=====

With the two-node S2D pool, where I can run VMFleet, I was hitting a maximum of 450MB/s. File copies averaged around 370MB/s from NVMe to the S2D pool. Disabling the 800GB caching drives with "Set-ClusterS2D -CacheState Disabled" yielded similar results.
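
For anyone repeating this, the cache state and device bindings can be checked before and after with something like the following (a sketch; Set-ClusterS2D is the short alias for Set-ClusterStorageSpacesDirect used above):

# Disable the S2D cache, then confirm the state and how the disks are bound
Set-ClusterStorageSpacesDirect -CacheState Disabled
Get-ClusterStorageSpacesDirect | Format-List CacheState, CacheModeSSD, CacheModeHDD
Get-PhysicalDisk | Group-Object Usage | Format-Table Name, Count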
 

Connorise

Member
Mar 2, 2017
75
17
8
33
US. Cambridge
File copies, although not the most accurate test, from an NVMe drive to the S2D cluster land between 300MB/s and 400MB/s. Individual disk performance is far higher when the drives are not in an S2D cluster, or when they are used with StarWind VSAN.
Speaking of file copying, I ran into an article, "Slow SMB files transfer speed". Try using robocopy.
You could also try tweaking the FirstBurstLength value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\000X\Parameters (i.e., aligning it with MaxBurstLength).
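
Something like this could be used to read and align the values (a sketch; 000X is a placeholder that has to be replaced with the actual adapter instance number):

# Read the current burst settings for the adapter instance
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\000X\Parameters'
Get-ItemProperty -Path $key | Select-Object FirstBurstLength, MaxBurstLength

# Align FirstBurstLength with MaxBurstLength
$max = (Get-ItemProperty -Path $key).MaxBurstLength
Set-ItemProperty -Path $key -Name FirstBurstLength -Value $max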

P.S. Did you try reaching out to support?
 

fops

New Member
Jan 29, 2023
4
0
1
Speaking of file copying, I ran into an article, "Slow SMB files transfer speed". Try using robocopy.
You could also try tweaking the FirstBurstLength value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\000X\Parameters (i.e., aligning it with MaxBurstLength).

P.S. Did you try reaching out to support?
I can confirm that changing the FirstBurstLength value dramatically improved my file transfer performance. In addition, OP can check whether the StarWind folders containing the disks are excluded at the antivirus/Defender level.
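
Something along these lines adds the exclusions and verifies them (a sketch; the paths are placeholders for wherever the virtual disk files actually live):

# Exclude the folders holding the virtual disk files from Defender real-time scanning
Add-MpPreference -ExclusionPath 'C:\ClusterStorage'
Add-MpPreference -ExclusionPath 'C:\StarWind'
Get-MpPreference | Select-Object -ExpandProperty ExclusionPath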