High performance DAS - NVMe or SSD

Discussion in 'Hard Drives and Solid State Drives' started by Stril, May 8, 2018.

  1. Stril

    Stril Member

    Joined:
    Sep 26, 2017
    Messages:
    178
    Likes Received:
    9
    Hi!

    I need to build a Windows Server 2016 machine with VERY fast DAS storage. I need at least 8 TB of capacity.

    Option 1:
    4x Intel DC P4600 4 TB
    RAID 10 - Windows software RAID?

    Option 2:
    32x Intel DC S4600 480 GB
    RAID 10 on an Avago 9361-8i?


    What would you prefer?

    I have never used a RAID system with NVMe cards. Is this possible? I would lose hot-swap capability, but should get better performance, or am I wrong?
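
    As a back-of-envelope comparison (a sketch only - the per-drive IOPS figures are assumptions, roughly spec-sheet values, and real RAID 10 behaviour depends heavily on the RAID layer, queue depth and write mix), the two options work out like this:

    Code:
    # Back-of-envelope comparison of the two options. Per-drive IOPS figures are
    # assumptions roughly in line with public spec sheets, not measurements.

    def raid10_summary(name, drives, capacity_tb, read_iops, write_iops):
        usable_tb = drives * capacity_tb / 2        # RAID 10 keeps half the raw space
        agg_read = drives * read_iops               # reads can be served by every drive
        agg_write = (drives // 2) * write_iops      # each write lands on both mirror halves
        print(f"{name}: {usable_tb:.2f} TB usable, "
              f"~{agg_read / 1e6:.1f}M read / ~{agg_write / 1e6:.2f}M write 4k IOPS (theoretical)")

    raid10_summary("Option 1: 4x P4600 4TB NVMe", 4, 4.00, 700_000, 200_000)
    raid10_summary("Option 2: 32x S4600 480GB SATA", 32, 0.48, 72_000, 65_000)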

    Thank you for your help!

    Stril
     
    #1
  2. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,677
    Likes Received:
    409
    Define "fast", low latencies or high throuhput? :D

    I would try to go with fewer devices, so option 1, but without more information about the use case it's just a preference.

    If you want to use mirrors ("RAID 10") you could look at Storage Spaces, Microsoft's version of software-defined storage.

    It's possible to use RAID with NVMe devices; most solutions use software RAID, but there are also a few hardware options. As with HDDs, performance depends on which RAID type you use (mirror, striping or parity), how many devices are in a logical device, and so on.

    Hot swapping is possible if your chassis/backplane, mainboard and OS/RAID software support it.
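
    As a rough illustration of the layout trade-off (textbook factors only, ignoring caches and controller behaviour, using a hypothetical 8x 4 TB pool):

    Code:
    # Classic usable-capacity and small-write-penalty factors per RAID layout.
    # Textbook approximations for a hypothetical 8x 4 TB pool.

    def layout(name, drives, capacity_tb, usable_fraction, write_penalty):
        usable = drives * capacity_tb * usable_fraction
        print(f"{name:>12}: {usable:4.1f} TB usable, write penalty x{write_penalty}")

    N, SIZE = 8, 4.0
    layout("stripe (0)", N, SIZE, 1.0, 1)           # no redundancy
    layout("mirror (10)", N, SIZE, 0.5, 2)          # every write goes to two drives
    layout("parity (5)", N, SIZE, (N - 1) / N, 4)   # read-modify-write: 2 reads + 2 writes
    layout("parity (6)", N, SIZE, (N - 2) / N, 6)   # double parity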
     
    #2
  3. Stril

    Stril Member

    Joined:
    Sep 26, 2017
    Messages:
    178
    Likes Received:
    9
    Hi!

    I need low latency and high IOPS.

    @Storage Spaces:
    I would use Storage Spaces as RAID 10 (mirrors). I do not like parity...

    Have you ever tested the performance of systems like these?
     
    #3
  4. BackupProphet

    BackupProphet Well-Known Member

    Joined:
    Jul 2, 2014
    Messages:
    785
    Likes Received:
    278
    I've done some benchmarking with Windows 2012 software RAID, mirrors and stripes. The performance is not really great; you are in fact better off using some kind of hardware RAID, or a SAN. It may be better in 2016, but I would seriously consider a hardware RAID alternative.

    Another thing: Intel has that Optane cache software. I'm not sure if it works on server-grade hardware yet, but I'm pretty sure that by using it you would get high and stable IOPS and low latency.
     
    #4
  5. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,677
    Likes Received:
    409
    Software RAID or Storage Spaces?
    With Storage Spaces + ReFS, Microsoft has shipped two major updates since Server 2012 (ReFS 1.x -> 2.x -> 3.x), changing a lot of stuff.
     
    #5
  6. Stril

    Stril Member

    Joined:
    Sep 26, 2017
    Messages:
    178
    Likes Received:
    9
    Hi!

    That's why I am asking. I do not trust the new hardware RAID controllers for NVMe, so my option would be Storage Spaces with ReFS.

    Has anybody here used this combination in a high-performance environment like mine?
     
    #6
  7. BackupProphet

    BackupProphet Well-Known Member

    Joined:
    Jul 2, 2014
    Messages:
    785
    Likes Received:
    278
    I have not tested Storage Spaces, but if it is now at the Linux/ZFS performance level, that is good news. A hardware RAID card is still faster, though, at least for burst IO.
     
    #7
  8. wvaske

    wvaske New Member

    Joined:
    Apr 12, 2017
    Messages:
    7
    Likes Received:
    2
    I do performance testing on all-flash SDS solutions on the Microsoft stack. Mirrors in Storage Spaces with flash scale to higher performance than hardware RAID options using the same devices (SAS or SATA -- I haven't tested NVMe hardware RAID, but I would expect it to limit performance compared to what Storage Spaces will offer).

    I'm currently testing a Storage Spaces Direct configuration (distributed storage). 32x SATA drives across 4 nodes with 3-way mirroring hits 1.5 million 4k random read IOPS and 10 GB/s large-block reads.
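
    As a quick sanity check on those numbers (assuming, just for the division, that reads spread evenly across all raw drives):

    Code:
    # Per-node and per-drive share of the quoted cluster numbers:
    # 32 SATA SSDs across 4 nodes, 1.5M random read 4k IOPS, 10 GB/s reads.

    drives, nodes = 32, 4
    total_iops, total_gbps = 1_500_000, 10.0

    print(f"per node : {total_iops / nodes:,.0f} IOPS, {total_gbps / nodes:.1f} GB/s")
    print(f"per drive: {total_iops / drives:,.0f} IOPS, {total_gbps * 1000 / drives:.0f} MB/s")
    # ~47k IOPS and ~313 MB/s per drive is well within what a decent SATA SSD can
    # sustain, so the aggregate figures look plausible rather than cache artifacts.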

    I've done testing with NVDIMMs and NVMe drives as well and Storage Spaces gives phenomenal performance.
     
    #8
    T_Minus likes this.
  9. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,828
    Likes Received:
    1,484
    Do companies often deploy 32+ drive SATA SSD Storage Spaces setups, or is this rare?
     
    #9
  10. wvaske

    wvaske New Member

    Joined:
    Apr 12, 2017
    Messages:
    7
    Likes Received:
    2
    32 drives (8 per node in a 4-node configuration) is a reasonably common configuration. Most customers I talk to are deploying hybrid configurations with 2-4 SSD cache drives (NVMe, SAS, or SATA) and 8-12 7.2k HDDs for capacity (per node).
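
    For a feel of what that hybrid layout looks like per node (the drive sizes below are assumptions purely for illustration):

    Code:
    # Rough per-node sizing for a hybrid node: 4 cache SSDs + 12 capacity HDDs.
    # The 1.6 TB and 8 TB drive sizes are assumed, just to show the cache ratio.

    cache_drives, cache_tb_each = 4, 1.6
    capacity_drives, capacity_tb_each = 12, 8.0

    cache_tb = cache_drives * cache_tb_each
    capacity_tb = capacity_drives * capacity_tb_each

    print(f"cache: {cache_tb:.1f} TB, capacity: {capacity_tb:.0f} TB, "
          f"cache-to-capacity ratio: {cache_tb / capacity_tb:.1%}")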
     
    #10
  11. azev

    azev Active Member

    Joined:
    Jan 18, 2013
    Messages:
    619
    Likes Received:
    157
    @wvaske do you mind sharing your configuration to get that kind of performance with Storage Spaces?
     
    #11
  12. wvaske

    wvaske New Member

    Joined:
    Apr 12, 2017
    Messages:
    7
    Likes Received:
    2
    Just to be clear, this is Storage Spaces Direct -- the distributed storage solution Microsoft is doing these days. It's a 4-node configuration (2-socket Intel Xeon Gold 6142 or 6148 with 384GB memory).
    Each node is using 8x read-intensive enterprise SATA SSDs.
    Networking is 100Gb Mellanox for RDMA support.

    To get the performance mentioned, I need a large number of VMs, each running the workload (a number of VMs equal to the number of hyper-threaded cores; that's 64 VMs per host with the 6142s).
    I can hit 1M random read IOPS with 20 VMs per host and medium-queue-depth workloads (QD = 4-8) on each VM. (Each VM uses a single thread to run the workload. In general, I see equivalent performance with fewer big VMs that use many threads vs. more small VMs with few/single threads.)
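
    Put differently, what the flash really sees is the total outstanding IO per host, roughly VMs x QD per VM (assuming one worker thread per VM, as above):

    Code:
    # Aggregate outstanding IO per host for a few VM-count / queue-depth combos.
    # The VM counts and depths mirror the ones mentioned above; the point is that
    # flash needs a deep aggregate queue to reach its rated IOPS, whether that
    # depth comes from many small VMs or a few multi-threaded ones.

    for vms in (64, 20):
        for qd in (4, 8):
            print(f"{vms:>2} VMs x QD {qd}: {vms * qd:>3} outstanding IOs per host")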

    A full write up will show up here in the next month or so: Storage
     
    #12
  13. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,950
    Likes Received:
    859
    Instead of traditional DAS, consider 100GbE NVMe over Fabrics. You can start direct attached, then move it onto the network and share it with other hosts later.
     
    #13
    wvaske likes this.
  14. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,860
    Likes Received:
    427
    I won't say that SATA is that common, but SAS, yes, absolutely. Still, I can't see why you couldn't use SATA; I would certainly consider it if I were building certain systems where power or simplicity was most important. Not needing a SAS adapter can really work for some people.
     
    #14
  15. wvaske

    wvaske New Member

    Joined:
    Apr 12, 2017
    Messages:
    7
    Likes Received:
    2
    Here's some data. I have a SuperMicro server with 2x Intel Xeon Gold 6142 (16c @ 2.6GHz) and 384GB of memory. I have a 24-port LSI HBA with 8x read-intensive SATA drives and 8x write-intensive SATA drives (no expander). I also have 4x read-intensive NVMe drives. (All drives are enterprise SSDs with full power protection, from a major manufacturer.)

    I have the SATA drives configured with Storage Spaces with 3-way mirroring and the NVMe drives configured with 2-way mirroring (not enough drives for 3-way). The virtual disks are thick provisioned and the drives were all secure erased right before volume creation. All volumes use ReFS with default allocation size.

    8x Write Intensive SATA Drives:
    [benchmark screenshot]

    8x Read Intensive SATA Drives:
    [benchmark screenshot]

    4x Read Intensive NVMe Drives:
    [benchmark screenshot]


    I also did one final test with all 16x SATA drives in a single pool. Storage Spaces will only use capacity on each drive equal to the capacity of the smallest drive in the pool, so the 4TB RI drives look like 2TB drives. This is to see the impact of 'spindle count' on the performance numbers.
    16x SATA Drives:
    [benchmark screenshot]

    So large block scales with drives; small block really doesn't unless you have enough threads to take advantage of it.
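
    For reference, the capacity math on that mixed 16-drive pool, applying the "smallest drive wins" rule mentioned above (the 2 TB WI drive size is assumed from the 2TB/4TB figures):

    Code:
    # Capacity of the mixed pool: 8x 2 TB WI + 8x 4 TB RI drives, where each
    # drive only contributes the capacity of the smallest drive in the pool.

    smallest_tb = 2.0
    drives = [2.0] * 8 + [4.0] * 8

    raw_tb = sum(drives)
    pooled_tb = len(drives) * smallest_tb   # capacity the pool can actually use
    mirror3_tb = pooled_tb / 3              # 3-way mirror keeps one third of that

    print(f"raw: {raw_tb:.0f} TB, pooled: {pooled_tb:.0f} TB, "
          f"3-way mirror usable: ~{mirror3_tb:.1f} TB")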
     
    #15