After setting this up we saw a frustrating 450 MB/s sequential write rate and about 6,000 IOPS (4K) with IOMeter. Then we found this thread and learned about the PLP problem with consumer NVMe drives and SSDs. Checking with Get-PhysicalDisk |? FriendlyName -Like "*" | Get-StorageAdvancedProperty showed that no drive reported PLP (IsPowerProtected: False, although the PM883 should have PLP) and that we were not able to retrieve the write cache status (WARNING: Retrieving IsDeviceCacheEnabled failed with ErrorCode 1).
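For reference, the check above can be run against all drives in one pipeline and trimmed to the relevant properties (a sketch based on the Storage module cmdlets; the actual values of course depend on your hardware):

```powershell
# List PLP and device-cache status for every physical disk.
# IsPowerProtected = False on all drives is what triggers the
# forced write-through behavior described above.
Get-PhysicalDisk |
    Get-StorageAdvancedProperty |
    Select-Object FriendlyName, IsPowerProtected, IsDeviceCacheEnabled
```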
The solution was to use this command (Set-StoragePool -FriendlyName "Cluster Name" -IsPowerProtected $true) to overrule the PLP detection and, even more importantly, to check the policy "Enable write caching on the device" under Device Manager/Disk drives. The sad thing is that you have to set this policy again after every reboot of the server. That should not be the case with PLP drives.
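The pool-level override is a one-liner, and it is worth verifying that it actually took effect (the pool name "Cluster Name" here is just an example; on an S2D cluster the pool usually carries the cluster's name). Note this only makes sense if the drives really have PLP or the server is UPS-backed, since it tells Windows it is safe to skip forced cache flushes:

```powershell
# Mark the storage pool as power protected so writes are no longer
# forced to write-through. Only safe with genuine PLP drives or a UPS;
# otherwise a power loss can cause data loss.
Set-StoragePool -FriendlyName "Cluster Name" -IsPowerProtected $true

# Verify the setting on the pool
Get-StoragePool -FriendlyName "Cluster Name" |
    Select-Object FriendlyName, IsPowerProtected
```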
After that we had over 2.2 GB/s sequential write speed and over 90,000 IOPS. The combined read/write performance (4K, 50/50) is about 130,000 IOPS, and read-only is about 400,000 IOPS!
I am just testing a W2019 S2D lab as well. I have 2 nodes with 8 consumer SSDs; read performance is outstanding, but write is far too slow. In a nested mirror setup I get only 50 MB/s. I have overridden the setting in Device Manager and enabled the write cache, and the pool is set to power protected with $true.
My goal was also to use the Samsung PM883, because they are advertised with PLP. Are you still happy with these, and is the write performance still OK? They should, however, be listed as power protected under Get-StorageAdvancedProperty; I do not understand your problem. Maybe the problem is the HBA for the SATA drives?
Did you use the NVMe drives as cache drives? I just configured the pool with SATA SSDs, no NVMe cache.
When I check the Windows Server Catalog, I find the Samsung PM883 certified for SDDC with W2019. Maybe I would do better with the Intel D3-S4510; they are in the same price range and also listed as certified for SDDC.