Options for NVMe RAID on Windows.


Jeff Robertson

Active Member
Oct 18, 2016
429
115
43
Chico, CA
Hi, I am going to purchase a new EPYC 7002 system as soon as SM releases their new boards. The system will boot from a pair of SATA SSDs in RAID 1 on a SAS controller. I would like to run VMs off NVMe drives, but I don't like the idea of using single drives in case of a failure. So what are my options for running a pair of NVMe drives in RAID 1 (as a ReFS volume with dedupe enabled on Server 2019)? The only two I can think of so far are Windows software RAID or Storage Spaces, and I am open to either as long as performance doesn't suffer too much. Any other ideas, or any thoughts on the performance of those two software options?
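
For reference, the Storage Spaces route I'm picturing would be something like the sketch below (untested; the pool/volume names and size are placeholders, and I'm assuming both NVMe drives show up as poolable):

```powershell
# Grab the two NVMe drives that are eligible for pooling
$nvme = Get-PhysicalDisk -CanPool $true | Where-Object BusType -eq 'NVMe'

# Create a pool, then a two-way mirror volume formatted as ReFS
New-StoragePool -FriendlyName 'NVMePool' `
    -StorageSubSystemFriendlyName '*Windows Storage*' -PhysicalDisks $nvme
New-Volume -StoragePoolFriendlyName 'NVMePool' -FriendlyName 'VMStore' `
    -ResiliencySettingName 'Mirror' -FileSystem ReFS -DriveLetter V -Size 800GB

# Server 2019 supports dedupe on ReFS (needs FS-Data-Deduplication installed)
Enable-DedupVolume -Volume 'V:' -UsageType HyperV
```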

Thanks!
 

ari2asem

Active Member
Dec 26, 2018
745
128
43
The Netherlands, Groningen
For a software solution you can use SnapRAID.

The disadvantage of SnapRAID is that you have to sync and scrub your array manually; performance is the same as a single NVMe drive.

There are also some scripts to automate SnapRAID.
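
For example, a minimal nightly wrapper for Task Scheduler might look like this (the paths are placeholders; `sync` and `scrub` with `-p`/`-o` are real SnapRAID commands):

```powershell
# Nightly SnapRAID maintenance: sync new data, then scrub part of the array.
# Install path and log location are placeholders; adjust for your setup.
$snapraid = 'C:\SnapRAID\snapraid.exe'
$log      = 'C:\SnapRAID\logs\snapraid-{0:yyyyMMdd}.log' -f (Get-Date)

& $snapraid sync *>> $log
if ($LASTEXITCODE -eq 0) {
    # Scrub 10% of the array per run, touching only blocks older than 7 days
    & $snapraid scrub -p 10 -o 7 *>> $log
}
```

Registered as a scheduled task, that covers the manual sync/scrub chore.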
 

TerryPhillips

New Member
May 7, 2019
23
6
3
SM's current H11SSL-NC offers the LSI 3008 RAID chip, which supports up to 8 drives for your OS RAID. Their website only mentions SAS for the 3008, but I inquired with SM's tech support about SATA support and was informed it is supported and that 2 logical drives can be created (I generally build 1 or 2 RAID 10 drives of 4 disks each). I believe this is the only Rome MB offering on-board RAID support. Although it's certainly not a requirement given the abundance of PCIe slots available, one could easily add a RAID card of choice.

The other feature of the H11SSL-NC is the addition of 2 NVMe ports via mini-SAS connectors, which would work great for a U.2 drive like Intel's Optane 905P. But there's no HW RAID capability for those 2 ports, which leaves you with the options already noted. Additionally, there is the M.2 port, which could be used either to augment the OS drives or in conjunction with the other 2 NVMe storage drives.

One other consideration would be an HCI setup with a pair of these. I run a 2-node HCI with SM's X11SCL-LN4F and an E-2176G processor. Storage Spaces Direct is running on 2x Optane 905P, 4x Intel 3700, and 8x Seagate 2.5" 1TB drives, and I easily see 300K IOPS. The 2 limiting factors with my current setup are that Core 0 maxes out trying to manage the I/O, and that the MB initially supported a 64GB RAM max. Even so, I run 2 dozen VMs and the performance is snappier than I could ever hope for. There is a BIOS update to address both issues; I just need time to make it happen.

My next HCI iteration will likely be the H11SSL-NC (or its successor) with an EPYC Rome 16C CPU, to benefit from the extra cores and memory channels/bandwidth. Though I am curious to see how the Rome CPU handles the I/O load...
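
For anyone curious, once the cluster exists the S2D layer itself is only a couple of cmdlets; a minimal sketch (volume name and size are placeholders):

```powershell
# Claim all eligible local drives on every node for the S2D pool
Enable-ClusterStorageSpacesDirect

# Carve out a mirrored, cluster-shared ReFS volume for the VMs
New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'VMVolume' `
    -FileSystem CSVFS_ReFS -Size 1TB
```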
 

Jeff Robertson

Active Member
Oct 18, 2016
429
115
43
Chico, CA
For a software solution you can use SnapRAID.

The disadvantage of SnapRAID is that you have to sync and scrub your array manually; performance is the same as a single NVMe drive.

There are also some scripts to automate SnapRAID.
Interesting solution, I will have to look into it. I do have a few questions: How does SnapRAID work as virtual machine storage? Can it be formatted as ReFS with dedupe enabled? Will it work on Server 2019?

Thanks!

Jeff R.
 

Jeff Robertson

Active Member
Oct 18, 2016
429
115
43
Chico, CA
SM's current H11SSL-NC offers the LSI 3008 RAID chip, which supports up to 8 drives for your OS RAID. Their website only mentions SAS for the 3008, but I inquired with SM's tech support about SATA support and was informed it is supported and that 2 logical drives can be created (I generally build 1 or 2 RAID 10 drives of 4 disks each). I believe this is the only Rome MB offering on-board RAID support. Although it's certainly not a requirement given the abundance of PCIe slots available, one could easily add a RAID card of choice.

The other feature of the H11SSL-NC is the addition of 2 NVMe ports via mini-SAS connectors, which would work great for a U.2 drive like Intel's Optane 905P. But there's no HW RAID capability for those 2 ports, which leaves you with the options already noted. Additionally, there is the M.2 port, which could be used either to augment the OS drives or in conjunction with the other 2 NVMe storage drives.

One other consideration would be an HCI setup with a pair of these. I run a 2-node HCI with SM's X11SCL-LN4F and an E-2176G processor. Storage Spaces Direct is running on 2x Optane 905P, 4x Intel 3700, and 8x Seagate 2.5" 1TB drives, and I easily see 300K IOPS. The 2 limiting factors with my current setup are that Core 0 maxes out trying to manage the I/O, and that the MB initially supported a 64GB RAM max. Even so, I run 2 dozen VMs and the performance is snappier than I could ever hope for. There is a BIOS update to address both issues; I just need time to make it happen.

My next HCI iteration will likely be the H11SSL-NC (or its successor) with an EPYC Rome 16C CPU, to benefit from the extra cores and memory channels/bandwidth. Though I am curious to see how the Rome CPU handles the I/O load...
Another S2D fan, I see. I am actually breaking down and selling my two-node S2D cluster (E5-2699 v4 CPUs) and going standalone. I am having two issues with S2D that have me frustrated enough to call it quits.

First, I am getting terrible write speeds because the drives aren't being detected as having PLP (power-loss protection), even though they are all on the supported list (LSI 3008 in IT mode with 4x Toshiba HK4 drives plus 2x Intel P3700 for caching per node). This leaves me with random write speeds in the single digits per VM (like 6MB/sec bad).

Second, if I shut node 1 down, node 2 takes over; but if I shut node 2 down, the whole thing crashes no matter what I do. I really like the idea of S2D, and I'm sure it works great in larger clusters, but I've just had no luck with it.
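
For reference, this is roughly how the PLP detection can be checked, along with the pool-level override I've seen suggested (the pool name is a placeholder; forcing IsPowerProtected tells Windows to skip cache flushes, so it is only safe on drives that genuinely have PLP):

```powershell
# Show whether Windows believes each disk has power-loss protection
Get-PhysicalDisk | Get-StorageAdvancedProperty |
    Select-Object FriendlyName, IsPowerProtected, IsDeviceCacheEnabled

# Pool-level override: treat the pool as power protected (skips flushes;
# risks data loss on power failure if the drives lack real PLP)
Set-StoragePool -FriendlyName 'S2D on Cluster1' -IsPowerProtected $true
```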

I figure I'll get a 24-core EPYC as the main system and an E-2286G 8-core as a low-power backup system and call it a day. More than enough juice for a few dozen VMs, and I don't have to worry about S2D and its performance problems.
 

ari2asem

Active Member
Dec 26, 2018
745
128
43
The Netherlands, Groningen
Interesting solution, I will have to look into it. I do have a few questions: How does SnapRAID work as virtual machine storage? Can it be formatted as ReFS with dedupe enabled? Will it work on Server 2019?

Thanks!

Jeff R.
snapraid.it

Virtual machine storage? Do you mean SnapRAID inside a virtual machine, or virtual machines running on top of SnapRAID?

SnapRAID is software RAID; it is not a file system.

ReFS is not supported. NTFS is well supported.
 

Philmatic

Active Member
Sep 15, 2011
124
85
28
Don't EPYC motherboards support motherboard-based RAID for NVMe like the consumer side does? Something similar to VROC?

I'm currently running a 4-way RAID 0 array with 4 NVMe SSDs on an X570 board: 12GB/s reads and 5GB/s writes.
 

Jeff Robertson

Active Member
Oct 18, 2016
429
115
43
Chico, CA
Don't EPYC motherboards support motherboard-based RAID for NVMe like the consumer side does? Something similar to VROC?

I'm currently running a 4-way RAID 0 array with 4 NVMe SSDs on an X570 board: 12GB/s reads and 5GB/s writes.
I *think* that is limited to consumer boards; someone correct me if I'm wrong. To my knowledge EPYC systems don't have that built in, which is a shame.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
And that's one of the main reasons enterprise SAS lives on... hardware RAID.

I have yet to see any satisfactory RAID for a Windows OS on NVMe. (On Linux there are options; in theory on Windows as well, but in my limited testing it doesn't work well enough, being neither robust nor performant.)
 

edge

Active Member
Apr 22, 2013
203
71
28
I have a hard time believing there is much future in hardware RAID with NVMe. Hardware RAID is chip-based, which invariably puts the controller on the PCIe lanes and BAM - bandwidth just died. The architecture is flawed.
 
Reactions: TerryPhillips

Philmatic

Active Member
Sep 15, 2011
124
85
28
I don't think anyone is truly looking for hardware RAID for NVMe, right? That wouldn't make any sense; we just want driver-supported soft RAID inside UEFI, like VROC or AMD's Ryzen/TR soft RAID.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
I have a hard time believing there is much future in hardware RAID with NVMe. Hardware RAID is chip-based, which invariably puts the controller on the PCIe lanes and BAM - bandwidth just died. The architecture is flawed.
Sorry, I was not suggesting there was any future for NVMe hardware RAID; I was just pointing out why SAS, especially for boot disks, still has a market.
 

edge

Active Member
Apr 22, 2013
203
71
28
Sorry, I was not suggesting there was any future for NVMe hardware RAID; I was just pointing out why SAS, especially for boot disks, still has a market.
No need to apologize, I didn't mean to come off that way. I agree that SAS isn't going away anytime soon.