Are you using RoCE RDMA or iWARP RDMA? RoCE requires DCB and QoS to be configured to operate successfully. We are switching our network cards from Mellanox ConnectX-3 Pro to Chelsio T580-LP-CR because of the QoS settings required for RoCE.
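For reference, a minimal sketch of the DCB/QoS pieces RoCE typically needs on Windows Server, assuming SMB Direct traffic is tagged with priority 3; the adapter names are placeholders for your RDMA NICs:

    # Minimal DCB/QoS sketch for RoCE (SMB Direct); adapter names are placeholders
    Install-WindowsFeature -Name Data-Center-Bridging
    New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
    Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
    Enable-NetQosFlowControl -Priority 3
    Enable-NetAdapterQos -Name "SLOT 3 Port 1","SLOT 3 Port 2"
    New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

The matching PFC priority and ETS settings also have to be configured on the switch ports, otherwise RoCE will still misbehave.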
Not yet. I opened two cases with Microsoft to see if another support engineer might know a solution. The strange thing is that the disks work perfectly locally, without S2D clustering. I did some testing with diskspd and tested the random read/write performance of the PM1725 locally, and the results...
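For context, the local diskspd runs were along these lines (illustrative flags and target path, not the exact command used):

    # Illustrative 4K random 70/30 read/write diskspd run against a local NVMe volume
    # (path, duration and queue depth are placeholders, not the exact test used)
    diskspd.exe -b4K -d60 -o32 -t8 -r -w30 -Sh -L -c50G E:\pm1725-test.dat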
We opened a support ticket with Microsoft to investigate the issue. We switched from a failover NIC team to a SET switch and have RoCE configured properly now. However, the writes are still horribly slow. If you run a test locally, the drives perform perfectly. I tried to set the IsPowerProtected...
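Roughly, the SET switch and the pool flag were set like this (a sketch with placeholder names, not the exact commands from the ticket):

    # Sketch: SET (Switch Embedded Teaming) vSwitch over the RDMA NICs; names are placeholders
    New-VMSwitch -Name "SETswitch" -NetAdapterName "SLOT 3 Port 1","SLOT 3 Port 2" -EnableEmbeddedTeaming $true
    # Sketch: mark the S2D pool as power protected (only safe with PLP-capable drives)
    Get-StoragePool -IsPrimordial $false | Set-StoragePool -IsPowerProtected $true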
I'm having the same issue with Samsung PM1725b SSDs as cache in an S2D cluster. The S2D is set up as a three-way mirror, and in total I have three cluster members with 8x SAS drives and 2x Samsung PM1725b per node. I ran the command Get-PhysicalDisk |? FriendlyName -Like "*" |...
I tried that command, but the parameter was already set to PLP. The Samsung PM1725 supports PLP, so it should automatically detect the right settings. I benchmarked all the NVMe SSDs in the cluster; below you see a screenshot of one of the cluster nodes performing a disk benchmark on a local NVMe...
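For anyone checking the same thing, the verification can be done roughly like this (a sketch; the property names are standard, the output will differ per system):

    # Sketch: confirm the pool reports power protection and list the NVMe cache devices
    Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, IsPowerProtected
    Get-PhysicalDisk | Where-Object BusType -eq NVMe | Select-Object FriendlyName, MediaType, Usage, Size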
I ran some more benchmarks with CrystalDiskMark.
Inside the S2D storage pool VM running Windows Server 2019:
After the CrystalDiskMark run I ran another fio benchmark; the results are still slow there.
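The fio job was roughly along these lines (illustrative parameters using the windowsaio engine inside the guest, not the exact job file):

    # Illustrative 4K random-write fio run inside the VM; size, path and depth are placeholders
    fio --name=randwrite --ioengine=windowsaio --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --size=10G --runtime=60 --time_based --direct=1 --filename=C\:\fio-test.dat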
I deployed a 3-node S2D cluster to run VMs in the failover cluster. I built the cluster with the following hardware configuration; each node has the same configuration.
Dell PowerEdge R740xd
1x Intel Xeon Gold 6152 (22 cores)
386GB DDR4 RAM
8x Seagate ST1200MM007 1.2TB SAS HDD
2x PM1725 3.2TB...