Options for NVMe raid on Windows.

Discussion in 'Windows Server, Hyper-V Virtualization' started by Jeff Robertson, Sep 2, 2019.

  1. Jeff Robertson

    Jeff Robertson Active Member

    Joined:
    Oct 18, 2016
    Messages:
    372
    Likes Received:
    84
    Hi, I am going to purchase a new EPYC 7002 system as soon as SM releases their new boards. The system will be running a pair of SATA SSDs in RAID 1 on a SAS controller as the boot drive. I would like to run VMs off of NVMe drives, but I don't like the idea of using single drives in case of a failure. So what are my options for running a pair of NVMe drives in RAID 1 (ReFS volume with dedupe on Server 2019)? So far the only two I can think of are Windows software RAID or Storage Spaces. I am open to using either as long as performance doesn't suffer too much. Any other ideas, or any thoughts on the performance of either of those software options?
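
    Something like this PowerShell is roughly what I'm picturing for the Storage Spaces route. Untested sketch: the pool/volume names and the drive letter are placeholders, and I'm assuming dedupe on ReFS works as advertised in Server 2019:

    ```
    # List disks eligible for pooling, then grab the two NVMe drives
    $disks = Get-PhysicalDisk -CanPool $true

    # Create a pool from them ("NVMePool" is just a placeholder name)
    New-StoragePool -FriendlyName "NVMePool" `
        -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

    # Two-way mirror virtual disk, i.e. the RAID 1 equivalent
    New-VirtualDisk -StoragePoolFriendlyName "NVMePool" -FriendlyName "VMStore" `
        -ResiliencySettingName Mirror -NumberOfDataCopies 2 -UseMaximumSize

    # Bring it online and format as ReFS (64K clusters are typical for VM storage)
    Get-VirtualDisk -FriendlyName "VMStore" | Get-Disk |
        Initialize-Disk -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem ReFS -AllocationUnitSize 65536

    # Dedupe on ReFS requires Server 2019+; "V:" is whatever letter got assigned
    Enable-DedupVolume -Volume "V:" -UsageType HyperV
    ```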

    Thanks!
     
    #1
  2. ari2asem

    ari2asem Member

    Joined:
    Dec 26, 2018
    Messages:
    211
    Likes Received:
    17
    For software solutions you can use SnapRAID.

    The disadvantage of SnapRAID is that you have to sync and scrub your array manually. Performance is the same as a single NVMe drive.

    There are also some scripts to automate SnapRAID.
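
    Something along these lines, for example. Rough sketch only, not tested; the install path and script name are assumptions:

    ```
    # Hypothetical maintenance script, e.g. C:\snapraid\maintain.ps1
    $snapraid = "C:\snapraid\snapraid.exe"
    & $snapraid sync          # update parity to match the current data
    & $snapraid scrub -p 5    # verify 5% of the array on each run

    # Register it to run nightly at 3 AM via Task Scheduler
    $action  = New-ScheduledTaskAction -Execute "powershell.exe" `
        -Argument "-File C:\snapraid\maintain.ps1"
    $trigger = New-ScheduledTaskTrigger -Daily -At 3am
    Register-ScheduledTask -TaskName "SnapRAID Maintenance" `
        -Action $action -Trigger $trigger
    ```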
     
    #2
  3. TerryPhillips

    TerryPhillips New Member

    Joined:
    May 7, 2019
    Messages:
    23
    Likes Received:
    6
    SM's current H11SSL-NC offers the LSI 3008 RAID chip, which supports up to 8 drives for your OS RAID. Their website only mentions "SAS" with the 3008, but I did inquire with SM's tech support about SATA support and was informed it is supported and that 2 logical drives can be created (I generally build 1 or 2 RAID 10 drives of 4 disks each). I believe this is the only Rome motherboard offering on-board RAID support. Although certainly not a requirement with the abundance of PCIe slots available, one could easily add a RAID card of choice.

    The other feature of the H11SSL-NC is the addition of 2 NVMe ports via mini-SAS connectors, which would work great for a U.2 drive like Intel's Optane 905P. But there's no HW RAID capability for these 2 ports, which leaves you with your options as noted. Additionally, there is the M.2 port, which could either be used to augment the OS or work in conjunction with the other 2 NVMe storage drives.

    One other consideration would be an HCI setup with a pair of these. I run a 2-node HCI with SM's X11SCL-LN4F and an E-2176G processor. Storage Spaces Direct is running on 2x Optane 905P, 4x Intel 3700, and 8x Seagate 2.5" 1TB drives, and I easily see 300K IOPS. The 2 limiting factors with my current setup are that Core 0 maxes out trying to manage the I/O and that the board initially supported 64GB RAM max. Even so, I run 2 dozen VMs and the performance is snappier than I could ever hope for. There is a BIOS update to address both issues; I just need time to make it happen.

    My next HCI iteration will likely be the H11SSL-NC (or its successor) with an EPYC Rome 16C CPU, to benefit from the extra cores and memory channels/bandwidth. Though I am curious to see how the Rome CPU handles the I/O load...
     
    #3
  4. Jeff Robertson

    Jeff Robertson Active Member

    Joined:
    Oct 18, 2016
    Messages:
    372
    Likes Received:
    84
    Interesting solution, I will have to look into it. I do have a few questions: How does SnapRAID work as virtual machine storage? Can it be formatted as ReFS with dedupe enabled? Will it work on Server 2019?

    Thanks!

    Jeff R.
     
    #4
  5. Jeff Robertson

    Jeff Robertson Active Member

    Joined:
    Oct 18, 2016
    Messages:
    372
    Likes Received:
    84
    Another S2D fan, I see. I am actually breaking down and selling a two-node S2D cluster (E5-2699 v4 CPUs) and going standalone. I am having two issues with S2D that have me frustrated enough to call it quits. First, I am getting terrible write speed because the drives aren't being detected as having PLP, even though they are all on the supported list (LSI 3008 in IT mode with 4x Toshiba HK4 drives plus 2x Intel P3700 for caching per node). This leaves me with random write speeds in the single digits per VM (like 6MB/sec bad). Second, if I shut node 1 down, node 2 takes over. If I shut node 2 down, the whole thing crashes no matter what I do. I really like the idea of S2D, and I'm sure it works great in larger clusters, but I've just had no luck with it.
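
    For anyone hitting the same thing, this is how I checked what Windows thinks about the drive caches. From what I can tell, `Get-StorageAdvancedProperty` reports an `IsPowerProtected` field per disk, and S2D throttles writes hard when it comes back false:

    ```
    # Show whether Windows detects power-loss protection on each physical disk.
    # Look at the IsPowerProtected and IsDeviceCacheEnabled columns.
    Get-PhysicalDisk | Get-StorageAdvancedProperty
    ```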

    I figure I'll get a 24 core epyc as the main system and a e-2286g 8 core for a backup low power system and call it a day. More than enough juice for a few dozen VMs and I don't have to worry about S2D and its performance problems.
     
    #5
  6. ari2asem

    ari2asem Member

    Joined:
    Dec 26, 2018
    Messages:
    211
    Likes Received:
    17
    snapraid.it

    Virtual machine storage? Do you mean SnapRAID inside a virtual machine, or a virtual machine running on SnapRAID?

    SnapRAID is software RAID; it is not a file system.

    ReFS is not supported. NTFS is well supported.
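
    A minimal setup looks something like this. The drive letters and paths here are just examples for a one-data-plus-one-parity layout, not a tested config:

    ```
    # Example snapraid.conf for Windows (paths are placeholders)
    # Parity file lives on the dedicated parity drive
    parity E:\snapraid.parity

    # Content files (array metadata); keep copies on multiple drives
    content C:\snapraid\snapraid.content
    content E:\snapraid.content

    # Data drive(s) to protect
    data d1 F:\
    ```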
     
    #6
  7. Philmatic

    Philmatic Member

    Joined:
    Sep 15, 2011
    Messages:
    87
    Likes Received:
    46
    Don’t EPYC motherboards support motherboard-based RAID for NVMe like the consumer side does? Something similar to VROC?

    I’m currently running a 4-way RAID 0 array with 4 NVMe SSDs on an X570 board; 12GB/s reads and 5GB/s writes.
     
    #7
  8. Jeff Robertson

    Jeff Robertson Active Member

    Joined:
    Oct 18, 2016
    Messages:
    372
    Likes Received:
    84
    I *think* that is limited to consumer boards; someone correct me if I'm wrong. To my knowledge, EPYC systems don't have that built in, which is a shame.
     
    #8
  9. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,865
    Likes Received:
    429
    And that's one of the main reasons enterprise SAS lives on... because of hardware RAID.

    I have yet to see any satisfactory RAID for a Windows OS on NVMe. (On Linux there are options; in theory on Windows also, but I don't think it works well enough. Not robust or performant enough in my limited testing.)
     
    #9
  10. edge

    edge New Member

    Joined:
    Apr 22, 2013
    Messages:
    11
    Likes Received:
    2
    I have a hard time believing there is much future in hardware RAID with NVMe. Hardware RAID is chip-based, which invariably puts it on the PCIe lanes, and BAM - bandwidth just died. The architecture is flawed.
     
    #10
    TerryPhillips likes this.
  11. Philmatic

    Philmatic Member

    Joined:
    Sep 15, 2011
    Messages:
    87
    Likes Received:
    46
    I don’t think anyone is truly looking for hardware RAID for NVMe, right? That wouldn't make any sense; we just want driver-supported soft RAID inside UEFI, like VROC or AMD Ryzen/TR soft RAID.
     
    #11
  12. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,865
    Likes Received:
    429
    Sorry, I was not suggesting there was any future for NVMe hardware RAID; I was just pointing out why SAS, especially for boot disks, still has a market.
     
    #12
  13. edge

    edge New Member

    Joined:
    Apr 22, 2013
    Messages:
    11
    Likes Received:
    2
    No need to apologize, I didn't mean to come off that way. I agree that SAS isn't going away anytime soon.
     
    #13