S2D slow write (no parity & NVMe cache)


NISMO1968

[ ... ]
Oct 19, 2013
San Antonio, TX
www.vmware.com
We can all keep our fingers crossed, but from multiple discussions with Microsoft developers, this is never going to happen. Unlike spinning disks, SSDs don't honor the "Force Unit Access" flag on writes, which forces the ACK to be returned only after the data in the write buffer actually reaches the storage medium.
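For context on what that flag does: on Windows, write-through is requested per file handle and maps to FILE_FLAG_WRITE_THROUGH, which the storage stack is supposed to translate into FUA writes on drives that honor it. A minimal sketch (the path is just an example, not anything from a real setup):

# Open a file with write-through; on a drive that honors FUA, each
# write is ACKed only after the data reaches the medium, not just
# the drive's volatile cache.
$fs = [System.IO.FileStream]::new(
    "C:\temp\fua-test.bin",
    [System.IO.FileMode]::Create,
    [System.IO.FileAccess]::Write,
    [System.IO.FileShare]::None,
    4096,
    [System.IO.FileOptions]::WriteThrough)
$fs.Write([byte[]]::new(4096), 0, 4096)
$fs.Dispose()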

[ ... ]

Also, if you want support for non-PLP SSDs in S2D, there's a request open on the Windows Server UserVoice forums:
Storage Spaces allow option for volatile cache (aka consumer SSDs)
 

wimoy

New Member
May 15, 2020
A dedicated journal generally does increase performance with parity spaces. Never worked with them, though, and sadly I no longer have access to the old playground to test it now. Sorry.

I bought three 16GB Intel Optane drives and used them as journal drives for a dual-parity setup. Sequential/random write performance improved by 2-4x.

It appears you can get away with non-PLP for data as long as you have PLP or Optane for journal drives.
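If anyone wants to reproduce this, roughly what the setup looks like in PowerShell, assuming a classic (non-S2D) Storage Spaces pool, where -Usage Journal applies; pool and drive names are placeholders:

# Dedicate the Optane drives to the journal, then build dual parity.
Get-PhysicalDisk -FriendlyName "*Optane*" |
    ForEach-Object { Set-PhysicalDisk -InputObject $_ -Usage Journal }

New-VirtualDisk -StoragePoolFriendlyName "Pool01" `
    -FriendlyName "DualParity01" `
    -ResiliencySettingName Parity `
    -PhysicalDiskRedundancy 2 `
    -UseMaximumSize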
 

Vader

New Member
Dec 5, 2020
After setting this up we saw a frustrating 450 MB/s sequential write rate and about 6,000 IOPS (4K) with IOMeter. Then we found this thread and learned about the PLP problem with consumer NVMe/SSDs. The check with Get-PhysicalDisk |? FriendlyName -Like "*" | Get-StorageAdvancedProperty showed that no drive had PLP (IsPowerProtected: False, although the PM883 should have PLP) and that we were not able to retrieve the write cache status (WARNING: Retrieving IsDeviceCacheEnabled failed with ErrorCode 1).
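For anyone else running the check: the FriendlyName filter above is a no-op, so the short form does the same thing. The output below is only illustrative of what we saw:

# Show power-protection and device-cache status for every disk.
Get-PhysicalDisk | Get-StorageAdvancedProperty

# Example output for a non-PLP (or misdetected) drive:
#   FriendlyName    IsPowerProtected   IsDeviceCacheEnabled
#   Samsung PM883   False              (WARNING: Retrieving
#                                      IsDeviceCacheEnabled failed
#                                      with ErrorCode 1.)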

The solution was to use this command (Set-StoragePool -FriendlyName "Cluster Name" -IsPowerProtected $true) to override the PLP detection and, even more importantly, to enable the policy "Enable write caching on the device" under Device Manager > Disk drives. The sad thing is that you have to set this policy again after every reboot. That presumably wouldn't be the case with real PLP drives.
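The same steps as commands, for reference ("Cluster Name" is a placeholder; be aware that marking the pool power protected tells Storage Spaces it can skip cache flushes, so a power cut on non-PLP drives can lose data):

# Override the pool's power-protection status.
Set-StoragePool -FriendlyName "Cluster Name" -IsPowerProtected $true

# Verify the setting took effect.
Get-StoragePool -FriendlyName "Cluster Name" |
    Select-Object FriendlyName, IsPowerProtected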

After that we had over 2.2 GB/s sequential write speed and over 90,000 IOPS. The combined read/write performance (4K 50/50) is about 130,000 IOPS, and read-only is about 400,000 IOPS!
I am testing a W2019 S2D lab as well. I have 2 nodes with 8 consumer SSDs; read performance is outstanding, but write is far too slow. In a nested mirror setup I get only 50 MB/s. I enabled write caching via the override in Device Manager, and the pool is set to power protected with $true.
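For reference, the nested mirror was created along the lines of Microsoft's documented pattern; the tier/volume names and the size here are placeholders, not my exact values:

# Nested two-way mirror (4 data copies) on a 2-node S2D cluster.
New-StorageTier -StoragePoolFriendlyName "S2D*" `
    -FriendlyName NestedMirror `
    -ResiliencySettingName Mirror `
    -MediaType SSD `
    -NumberOfDataCopies 4

New-Volume -StoragePoolFriendlyName "S2D*" `
    -FriendlyName Volume01 `
    -StorageTierFriendlyNames NestedMirror `
    -StorageTierSizes 500GB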

My goal was also to use the Samsung PM883, because they are advertised with PLP. Are you still happy with them, and is write performance still OK? They should be listed as power protected under Get-StorageAdvancedProperty, so I don't understand your result. Maybe the problem is the HBA for the SATA drives?

Did you use the NVMe drives as cache drives? I just configured the pool with SATA SSDs, no NVMe cache.

When I check the Windows Server Catalog, the Samsung PM883 is listed as certified for SDDC with W2019. Maybe I'd be better off with the Intel D3-S4510; they are in the same price range and also listed as certified for SDDC.
 

hyltcasper

Member
May 1, 2020
Does manually setting the PLP flag solve the speed issue?
If yes, the power problem is simple to work around. There are many capacitor-backed NVMe PCIe cards:
JEYI MX16-1U M.2 NVMe
Ugreen NVMe card
ASUS Hyper M.2 V2
I think the capacitance is enough. If not, you can increase it by soldering identical capacitors in parallel.
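(For reference: capacitance adds in parallel, C_total = C1 + C2 + ..., so two identical 470 µF capacitors give 940 µF. Those values are just an example, not what's actually fitted on these cards.)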