WD Ultrastar DC SS530 SAS SSD for high-performance HA storage/Slog

gea

Well-Known Member
Dec 31, 2010
I have run some tests with the WD Ultrastar DC SS530, which WD promotes as its fastest SAS SSD. The drives are 12G dual-path SAS and come in variants with 4k random write performance between 100k and 320k IOPS, and in capacities from 400 GB (perfect for an Slog) up to 15 TB for regular storage.

In my first test I wanted to use it mainly as an Slog in a dual-path HA SAS setup, which is exactly where I cannot use an Optane. First results:

A disk pool of 3 disks in RAID-0, without Slog
Async write: > 800 MB/s
Sync write: 50 MB/s (bad, but normal for disks)

Same pool with a DC SS530 400 GB (3 DWPD model) as Slog
Sync write: 400 MB/s (a huge jump from 50 MB/s)
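For reference, a minimal sketch of how such a setup can be built on a ZFS system; the pool and device names are hypothetical and depend on your controller:

# create a striped pool from three disks (hypothetical device names)
zpool create tank c0t1d0 c0t2d0 c0t3d0

# add the SS530 as a dedicated log device (Slog)
zpool add tank log c0t4d0

# force every write to stable storage so the Slog is exercised
zfs set sync=always tank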

Then I compared a single-disk pool from the SS530 with an Optane 900P.
As this test was for a vCluster in a box, the SS530 was in passthrough mode under ESXi, while the 900P values come from a barebone setup in earlier tests, so the numbers are not exactly comparable but give enough insight.

DC SS 530 (single disk pool), AiO
Async write: 1007 MB/s
Sync Write: 550 MB/s

Optane 900P, barebone
Async Write: 1611 MB/s
Sync Write: 680 MB/s

In this test the SS530 is only about 20% slower than the Optane on sync write.
A perfect result, as it is dual-path SAS and therefore suited for HA.

And perfect for a high-capacity/high-performance filer.


Datasheet:
https://documents.westerndigital.co...-sas-series/data-sheet-ultrastar-dc-ss530.pdf

See chapter 7.2:
http://www.napp-it.org/doc/downloads/z-raid.pdf
 

Stril

Member
Sep 26, 2017
Hi!

Thank you for your tests. It would be great to see an all-flash SS530 pool compared to a disk pool with an SS530 Slog.

About Slog: Did you find a possibility to "under-provision" the SSD?

Can you tell us something about your test setup (block size, outstanding I/Os, parallel streams)?
 

azev

Well-Known Member
Jan 18, 2013
WOW, I didn't know that WD/HGST were able to come up with a SAS-based SSD whose performance can compete even with top NVMe drives. This is very impressive.
 

gea

Well-Known Member
Dec 31, 2010
It is a flash device, so overprovisioning is possible. (I have not tried a host protected area, but partitioning a new or secure-erased SSD is always an option. With 320k write IOPS at 4k on the 10 DWPD models, this does not seem necessary.)
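If you do want to under-provision, here is a minimal sketch of the partitioning approach, assuming a Linux box and a hypothetical device name; on SAS drives, sg_format from sg3_utils can alternatively shrink the advertised capacity:

# use only part of the device by partitioning (example: 50%)
parted -s /dev/sdX mklabel gpt
parted -s /dev/sdX mkpart slog 0% 50%

# alternative for SAS: resize the advertised number of blocks
# (the count below is only an example value)
sg_format --resize --count=97600000 /dev/sgX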

My tests are a default run of the napp-it benchmark series with the filebench randomwrite.f workload and default ZFS settings (128k recordsize, compression off), comparing sync disabled vs sync always.
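A sketch of that comparison, assuming filebench is installed; the workload path below is only an example, as the stock location varies by platform:

# baseline: no sync writes at all
zfs set sync=disabled tank
filebench -f /usr/share/filebench/workloads/randomwrite.f

# worst case: every write committed to stable storage (Slog in use)
zfs set sync=always tank
filebench -f /usr/share/filebench/workloads/randomwrite.f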

According to the specs, a single SS530 can deliver over 2000 MiB/s (12G dual-path SAS), while a single fast 12G disk, e.g. an HGST He, reaches around 200 MiB/s (single or dual path). You would need about 10 disks to achieve a similar sequential performance, so these SSDs deliver a sequential performance that is simply not possible with disks. On random I/O the gap is even larger: a disk manages around 300 IOPS vs over 300,000 write IOPS for the SS530; not comparable.

But from a practical view, the question is different. If you want, say, an affordable storage system with 100 TB that can deliver 10G network performance with and without sync, plus the option of an HA cluster, a SAS disk pool with an SS530 as Slog is the best option, especially as ZFS with its superior RAM-based read/write caches can massively reduce random load on the pool. A layout along these lines is sketched below.
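As an illustration only (hypothetical device names, with the number of mirror vdevs scaled to reach the target capacity):

# striped mirrors of large SAS disks, SS530 as dedicated Slog
zpool create tank \
  mirror c0t10d0 c0t11d0 \
  mirror c0t12d0 c0t13d0 \
  mirror c0t14d0 c0t15d0 \
  log c0t20d0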

If you simply want the best I/O performance, you can use an SSD-only pool. With capacities up to 15 TB per SSD you can reach the same 100 TB as the disk-based pool above, but at around 20 times the cost.
 

Evan

Well-Known Member
Jan 6, 2016
I have a bunch of SS300s that should also deliver pretty decent performance. When I look at the spec sheets, I agree they seem impressive for SAS3 disks. It may be time I try a benchmark or two rather than just using them and knowing they work OK.
 

gea

Well-Known Member
Dec 31, 2010
Raw performance of these new SAS SSDs exceeds a single 12G SAS3 link.
Time to think about SAS multipath I/O solutions.
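On an illumos/Solaris based system such as a napp-it box, enabling multipathing is, as far as I can tell, a one-liner (a reboot follows); verify the paths afterwards:

# enable MPxIO for supported controllers (requires reboot)
stmsboot -e

# after the reboot, list logical units and check their path count
mpathadm list lu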
 

Stril

Member
Sep 26, 2017
Hi!

Is this really the bottleneck? I think perhaps with very large blocks, but with smaller blocks there seem to be other things that slow it down...
 

gea

Well-Known Member
Dec 31, 2010
The main aspect for me:
When one talks about really high performance, one mostly means NVMe, U.2 and Optane at the upper end. While no single disk is an alternative to an Optane, Optane lacks features like hotplug and many disks per array (and therefore high capacity), and last but not least it is not affordable at higher capacities.

The new generation of 12G SAS SSDs fills that gap: nearly as fast, hotplug-capable, many disks per array so no capacity limit, performance that scales with the number of disks, and relatively cheap. A possible performance improvement via MPIO is not the main item; it is also about availability and HA clustering, not only on the filesystem level as with a cluster filesystem, but on the service level, e.g. NFS or SMB.
 

Evan

Well-Known Member
Jan 6, 2016
Also, a lot of enterprise shops still use HPE, Dell, Lenovo etc. with hardware RAID. NVMe and Optane are great, but SAS3 is a super simple implementation when you need hardware RAID. Lots of products are not built with non-RAID disks in mind; that is changing, sure, but not that fast.

Getting off track, but just to throw out the question:
For SQL Server Always On, do you still run hardware RAID on the servers, or use NVMe and let the product handle everything and provide all the redundancy? So far I know what usually causes less impact during a disk failure. (Granted, flash has AFRs one tenth of those of spinning disks.)
 

Stril

Member
Sep 26, 2017
@Evan:
Can you tell me what you mean by AFRs?

About your question:
For me, SAS SSDs are much more flexible than NVMe. The option to use them without RAID in a Storage Spaces (without Direct) or ZFS cluster is exactly what makes them so flexible...