WD Ultrastar DC SS 530 SAS SSD for high performance HA storage/Slog

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by gea, Jan 10, 2019.

  1. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
    I have made some tests with the WD Ultrastar DC SS 530, as WD promotes it as their fastest SSD. The drives are 12G dual-path SAS and are available with 4k random-write ratings between 100k and 320k write IOPS, in capacities from 400 GB (perfect for an Slog) up to 15 TB for regular storage.

    In my first test I wanted to use it mainly as an Slog in a dual-path HA SAS setup, as this is where I cannot use an Optane. First results:

    A pool of 3 disks in Raid-0, without Slog
    Async write: > 800 MB/s
    Sync write: 50 MB/s (bad, but normal for disks)

    Same pool with a DC SS 530 400 GB (3DW model) as Slog
    Sync write: 400 MB/s (a huge jump from 50 MB/s)
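    Expressed as a simple speedup factor (plain arithmetic on the numbers above):

    ```python
    # Sync-write throughput of the 3-disk Raid-0 pool, from the measurements above.
    without_slog_mb_s = 50   # no Slog
    with_slog_mb_s = 400     # DC SS 530 400 GB (3DW) as Slog

    speedup = with_slog_mb_s / without_slog_mb_s
    print(f"Slog speedup on sync writes: {speedup:.0f}x")  # prints: Slog speedup on sync writes: 8x
    ```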

    Then I compared a single-disk pool from the SS 530 with an Optane 900P.
    As this test was for a vCluster-in-a-box, the SS 530 ran in passthrough mode under ESXi, while the 900P
    numbers come from a bare-metal setup in earlier tests, so the values are not exact but give enough insight.

    DC SS 530 (single disk pool), AiO
    Async write: 1007 MB/s
    Sync Write: 550 MB/s

    Optane 900P, bare metal
    Async Write: 1611 MB/s
    Sync Write: 680 MB/s

    The SS 530 is only about 20% slower on sync writes in this test.
    A perfect result, as this is dual-path SAS and therefore suited for HA.
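    Just to make the arithmetic behind the "20% slower" figure explicit (numbers from the two runs above):

    ```python
    # Sync-write results from the two single-disk pools above.
    ss530_sync_mb_s = 550    # DC SS 530, AiO/passthrough
    optane_sync_mb_s = 680   # Optane 900P, bare metal

    slower_pct = (1 - ss530_sync_mb_s / optane_sync_mb_s) * 100
    print(f"SS 530 is {slower_pct:.0f}% slower on sync writes")  # prints: SS 530 is 19% slower on sync writes
    ```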

    And perfect for a high-capacity/high-performance filer.


    datasheet
    https://documents.westerndigital.co...-sas-series/data-sheet-ultrastar-dc-ss530.pdf

    see chapter 7.2
    http://www.napp-it.org/doc/downloads/z-raid.pdf
     
    #1
    Patrick likes this.
  2. Stril

    Stril Member

    Joined:
    Sep 26, 2017
    Messages:
    124
    Likes Received:
    5
    Hi!

    Thank you for your tests. It would be great to see an all-flash SS530 pool compared to a disk pool with an SS530 Slog.

    About the Slog: did you find a possibility to "under-provision" the SSD?

    Can you tell us something about your test setup (block size, outstanding I/Os, parallel streams)?
     
    #2
    Last edited: Jan 10, 2019
  3. azev

    azev Active Member

    Joined:
    Jan 18, 2013
    Messages:
    536
    Likes Received:
    128
    Wow, I didn't know WD/HGST were able to come up with a SAS-based SSD whose performance can compete even with top NVMe drives. This is very impressive.
     
    #3
  4. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
    It is a flash device, so overprovisioning is possible. (I have not tried a host protected area, but partitioning a new or secure-erased SSD is always an option. With 320k write IOPS at 4k for the 10DW models, this does not seem necessary.)
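    The partitioning approach to under-provisioning boils down to leaving a slice of the raw capacity unpartitioned as extra spare area. A minimal sketch; the 20% spare fraction here is an arbitrary example value, not a vendor recommendation:

    ```python
    def underprovisioned_size_gb(raw_gb: float, spare_fraction: float = 0.20) -> float:
        """Usable partition size when a fraction of the raw capacity
        is left unpartitioned as extra spare area for the controller."""
        if not 0 <= spare_fraction < 1:
            raise ValueError("spare_fraction must be in [0, 1)")
        return raw_gb * (1 - spare_fraction)

    # On a 400 GB Slog model: partition 320 GB, leave 80 GB untouched.
    print(underprovisioned_size_gb(400))  # 320.0
    ```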

    My tests are a default run of the napp-it benchmark series with filebench randomwrite.f and default ZFS settings (128k recordsize, compression off), comparing sync=disabled vs sync=always.

    According to the specs, a single SS 530 can deliver over 2000 MiB/s (12G dual-path SAS), while a single fast 12G disk such as an HGST He is at around 200 MiB/s (single- or dual-path). You would need about 10 disks to achieve similar sequential performance, so these SSDs deliver a sequential performance that is simply not possible with disks. On random I/O the gap is even larger: a disk manages around 300 IOPS vs over 300,000 write IOPS for the SS 530; not comparable at all.
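    The back-of-the-envelope comparison above, spelled out (all figures taken from the spec numbers quoted in this post):

    ```python
    import math

    ssd_seq_mib_s = 2000     # DC SS 530 spec, 12G dual-path SAS
    hdd_seq_mib_s = 200      # fast 12G disk, e.g. HGST He-series
    ssd_write_iops = 300_000
    hdd_iops = 300

    # Disks needed to match one SSD's sequential throughput.
    disks_for_seq = math.ceil(ssd_seq_mib_s / hdd_seq_mib_s)
    # Random-write IOPS ratio between one SSD and one disk.
    iops_ratio = ssd_write_iops // hdd_iops

    print(disks_for_seq, iops_ratio)  # prints: 10 1000
    ```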

    From a practical view, though, the question is different. If you want, say, an affordable 100 TB storage system that can deliver 10G network performance with and without sync, plus the option of an HA cluster, then a SAS disk pool with an SS 530 as Slog is the best option, especially as ZFS with its superior RAM-based read/write caches can massively reduce random load on the pool.

    If you simply want the best I/O performance, you can use an SSD-only pool. With capacities up to 15 TB per SSD you can easily reach the same 100 TB as the disk-based pool above, but at around 20 times the cost.
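    The drive count behind that 100 TB all-flash figure is simple arithmetic:

    ```python
    import math

    target_tb = 100   # pool capacity goal from the example above
    ssd_tb = 15       # largest DC SS 530 capacity

    ssds_needed = math.ceil(target_tb / ssd_tb)
    print(ssds_needed)  # prints: 7
    ```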
     
    #4
    Last edited: Jan 11, 2019 at 12:07 AM
  5. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,393
    Likes Received:
    336
    I have a bunch of SS300s that should also deliver pretty decent performance. Looking at the spec sheets, I agree they seem impressive for SAS3 drives. Maybe it is time I try a benchmark or two instead of just using them and knowing they work OK.
     
    #5
    T_Minus likes this.
  6. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
    The raw performance of these new SAS SSDs exceeds a single 12G SAS3 link.
    Time to think about SAS multipath I/O solutions.
     
    #6
  7. Stril

    Stril Member

    Joined:
    Sep 26, 2017
    Messages:
    124
    Likes Received:
    5
    Hi!

    Is this really the bottleneck? I think perhaps with very large blocks, but with smaller blocks there seem to be other things slowing it down...
     
    #7
  8. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,966
    Likes Received:
    638
    The main aspect for me:
    When one talks about really high performance, one mostly means NVMe, U.2 and Optane at the upper end. While there is no single-disk alternative to an Optane, it lacks features like hotplug and arrays of many disks (and therefore high capacity), and last but not least it is not affordable at higher capacities.

    New-generation 12G SAS SSDs fill the gap: nearly as fast, hotpluggable, many disks per array so no capacity limit and performance that scales with the number of disks, and relatively cheap. A possible performance improvement via MPIO is not the main point; it is also about availability and HA clustering, not only on the filesystem level as with a cluster filesystem, but on the service level, e.g. NFS or SMB.
     
    #8
  9. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,393
    Likes Received:
    336
    Also, a lot of enterprise shops still use HPE, Dell, Lenovo etc. with hardware RAID. NVMe and Optane are great, but SAS3 is a super simple implementation when you need hardware RAID. Lots of things are not built with non-RAID disks in mind; that is changing, sure, but not that fast.

    Getting off track, but just to throw out the question:
    For SQL Always On, do you still run hardware RAID on the servers, or use NVMe and let the product do everything and provide all the redundancy? So far I know which usually causes less impact during a disk failure. (Granted, flash has AFRs one tenth those of spinning disks.)
     
    #9
  10. Stril

    Stril Member

    Joined:
    Sep 26, 2017
    Messages:
    124
    Likes Received:
    5
    @Evan:
    Can you tell me what you mean by AFRs?

    About your question:
    For me, SAS SSDs are much more flexible than NVMe. The possibility to use them without RAID in Storage Spaces (without Direct) or in a ZFS cluster makes them flexible...
     
    #10
  11. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,491
    Likes Received:
    338
    Annualized Failure Rate
     
    #11