2x NVMe in RAID 1 on a Consumer Motherboard

Suggested Data Drive Arrangement

  • 3x or 4x SATA in RAID 5 with onboard controller
  • Other

AndrewJacksonZA

New Member
Feb 16, 2020
South Africa
Hi

First post here, apologies if this is in the wrong section, but it seems to span three categories: Motherboards, RAID controllers, and SSDs.

We're trying out Google's approach of buying consumer-grade machines as a replacement for a ten-year-old Xeon server. For software-licensing reasons we need the maximum possible grunt from an 8-core CPU, and 8-core Xeons and EPYCs just clock too low.

We're going to be running Windows Server 2019 on an i9-9900K on an MSI MAG Z390 Tomahawk with 128GB of RAM (4x 32GB Corsair Vengeance LPX 2666 C16), clocking it at 4.9 or 5GHz.

I've specified 1x Samsung 860 EVO SATA as the OS drive, and 2x Samsung 970 EVO Plus 1TB NVMe drives in RAID 1, defined in the BIOS (not Windows), as the data drive. However, I'm having second thoughts and have some questions about how RAID is handled on a consumer board:
  1. If one of the NVMe drives fails, because the RAID 1 is set up in the BIOS, will the remaining drive carry on until we can replace the dead one? On server hardware, if a drive dies, one walks into the server room, pulls the drive from the chassis, plugs in a new one, and the drive backplane takes care of it, all without downtime. I'm not yet sure how this works on consumer motherboards.
  2. We're migrating from 5400RPM HDDs in RAID 1, so ANY boost in performance would be VERY welcome. Given that, for more "server"-like behaviour in terms of RAID array reconstruction, perhaps an Intel RS3WC080 RAID card and a change to SATA SSDs in RAID 5? In terms of TB written, it appears that 70TB/year might be it, and drives like the Samsung 860 EVO 1TB SATA are rated for 600TBW (see the endurance arithmetic sketched just below this list). The MSI MAG Z390 Tomahawk's specs say that it supports RAID 5 over SATA, so I'm not sure if the Intel RAID controller add-in card would just be a waste of money.
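For question 2, the endurance arithmetic seems to work out comfortably either way; a quick sketch using the numbers above (70TB/year of writes, 600TBW rating):

```python
# Endurance headroom, using the figures quoted above.
annual_writes_tb = 70     # estimated host writes per year
rating_tbw = 600          # Samsung 860 EVO 1TB endurance rating

# RAID 1 mirrors every write, so each drive absorbs the full volume.
print(f"RAID 1: ~{rating_tbw / annual_writes_tb:.1f} years per drive")  # ~8.6

# A 4-drive RAID 5 spreads data + parity across all drives; for
# full-stripe writes each drive sees roughly a third of the data
# volume (small random writes amplify more than this).
per_drive_tb = annual_writes_tb / 3
print(f"RAID 5 (4 drives): ~{rating_tbw / per_drive_tb:.0f} years per drive")  # ~26
```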

In case this matters, we're going to be running the CPU at full tilt for about 10 minutes every 30 minutes between 05:00 and 22:00, and writing to the drives at the same time. Reads will typically occur during those same intensive 10-minute slots, with less intensive reads scattered throughout the day.

I'm concerned that the motherboard's chipset won't have the grunt to compute the parity needed for SSDs in RAID 5.
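On the parity worry: it's worth noting that chipset "RAID" like Intel RST computes parity on the host CPU via the driver anyway, and the parity itself is just a byte-wise XOR across each stripe. A toy illustration with hypothetical 4-byte blocks:

```python
from functools import reduce

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-sized blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Toy stripe: three "data drives" holding hypothetical 4-byte blocks.
data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]

# The parity block is simply the XOR of all data blocks in the stripe.
parity = reduce(xor_blocks, data)

# If any one block is lost, XORing the parity with the survivors recovers it.
recovered = reduce(xor_blocks, [parity, data[1], data[2]])
assert recovered == data[0]
```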

Thank you
Andrew
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Hi, welcome to the forum :)

Where is Google suggesting consumer SSDs, NVMe, and consumer CPUs in the enterprise?
I believe back in the day (15+ years ago) they used some, but in recent years I don't believe this is the case, and I'd find it hard to believe any enterprise would use consumer SSDs or NVMe, for various reasons.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
If you explain the use case I'm sure the community can help guide you :)
 

AndrewJacksonZA

New Member
Feb 16, 2020
South Africa
Thanks. :)

The program in question is Tableau Server.

The use case for the data drive, which is the one that is going to be RAIDed, is writing perhaps 5GB of data every 30 minutes to the drive between 05:00 - 22:00 every day. Reading from the drives will typically occur during those same periods, and then random reads scattered throughout the day.
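
Those figures pencil out close to the ~70TB/year estimate from the first post; a quick sketch assuming a flat 5GB per half-hourly run:

```python
# Daily/annual write volume from the workload described above:
# ~5GB written every 30 minutes between 05:00 and 22:00.
hours_active = 22 - 5              # 05:00 to 22:00
runs_per_day = hours_active * 2    # one run every 30 minutes
gb_per_run = 5

gb_per_day = runs_per_day * gb_per_run    # 170 GB/day
tb_per_year = gb_per_day * 365 / 1000     # ~62 TB/year
print(f"{gb_per_day} GB/day, ~{tb_per_year:.0f} TB/year")
```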

Backups are currently happening twice a week, and that should be increasing to every second day with the upgraded machine.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
AndrewJacksonZA said:
... In case this matters, we're going to be running the CPU at full tilt every 30 minutes for about 10 minutes between 05:00 - 22:00, and writing to the drives at the same time. ...
And no ECC memory ??
[As such, sweating over RAID1 vs 5 is like "putting the cart before the horse". No? [Or, maybe it's government work? (ie, close enough for)] ]
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
AndrewJacksonZA said:
Thanks. :)

The program in question is Tableau Server.

The use case for the data drive, which is the one that is going to be RAIDed, is writing perhaps 5GB of data every 30 minutes to the drive between 05:00 - 22:00 every day. Reading from the drives will typically occur during those same periods, and then random reads scattered throughout the day.

Backups are currently happening twice a week, and that should be increasing to every second day with the upgraded machine.
I'm not familiar with your specific software myself, but I'll share what I'd do on a budget for a quality build vs. consumer-only. I don't know if you NEED ECC RAM for your usage. I wouldn't trust business data to software RAID set up in the BIOS, but that's me. ECC RAM is cheap (32GB x 2 = $200 for DDR4 RDIMM ECC), but it's not supported by your CPU, so you'll end up paying more for desktop RAM.

What I suggest (and the route I go myself) is to buy new-old-stock or lightly used enterprise NVMe or SSD on eBay. You can get a new 1TB Intel enterprise NVMe drive for $100, or 2TB for $200. These aren't write-intensive drives, but you can run 4x of them with Storage Spaces in a RAID10-type configuration. (The P3500 has better write performance than the P4500 at 1TB; at 2TB it probably doesn't matter which, so go with whichever is cheaper.)
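
A quick sketch of the capacity/cost math behind that suggestion, using the quoted prices (4x 1TB at $100 each; a two-way mirror keeps half the raw capacity):

```python
drives, tb_each, usd_each = 4, 1, 100

usable_tb = drives * tb_each / 2    # two-way mirror (RAID10-type) keeps half
total_usd = drives * usd_each
print(f"{usable_tb:.0f} TB usable for ${total_usd} "
      f"(~${total_usd / usable_tb:.0f} per usable TB)")
# -> 2 TB usable for $400 (~$200 per usable TB)
```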

I would also suggest going with a server chassis vs. a desktop/home-style case, so you can still get hot-swap, etc. This may not be viable if you don't have a room to stash it in away from people, due to the sound ;) or if you're trying to re-use something. If you're re-using an old desktop case, you should still update/replace the power supply.

If you don't want to use Storage Spaces with NVMe, you can still use it with SATA drives and run 6 instead of 4 for more performance. I think most would say RAID5 shouldn't be used anymore; I use RAID6-type storage and RAID10. I would also use a HW RAID controller over BIOS RAID. Storage Spaces is good too, I think, but again I'm not a Windows person. It would keep costs lower than buying more hardware controllers. A minimal sketch of that setup follows below.
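
For anyone going the Storage Spaces route, here's a minimal sketch of the setup, driven from Python purely for illustration. The "DataPool"/"Data" names are hypothetical; New-StoragePool and New-VirtualDisk are the standard Storage Spaces cmdlets, and a two-way mirror across four or more disks stripes like RAID 10:

```python
import subprocess

def ps(command: str) -> str:
    """Run a PowerShell command and return its stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Show disks eligible for pooling.
print(ps("Get-PhysicalDisk -CanPool $true | Format-Table FriendlyName, Size"))

# Pool all poolable disks ('DataPool' is a hypothetical name).
ps('New-StoragePool -FriendlyName "DataPool" '
   '-StorageSubSystemFriendlyName "Windows Storage*" '
   '-PhysicalDisks (Get-PhysicalDisk -CanPool $true)')

# Two-way mirror across the pool; with 4+ disks this stripes like RAID 10.
ps('New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "Data" '
   '-ResiliencySettingName Mirror -NumberOfDataCopies 2 -UseMaximumSize')
```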
 

ReturnedSword

Active Member
Jun 15, 2018
Santa Monica, CA
@AndrewJacksonZA Looks like you are in South Africa. Here in the US, and generally in Europe, we have a lot of options for lightly used server-pull parts. I'm not sure about the supply of server-pull components in South Africa, but my guess would be not much.

On the topic of what type of RAID to use, I'd pick soft RAID (such as Storage Spaces) over BIOS RAID on the motherboard's ports or a HW RAID controller. You'd have more flexibility to migrate to a new platform later if the motherboard (or controller) fails.

With using a consumer motherboard's M.2 slots, a consideration would be the difficulty of changing the NVMe drives. Unlike most server motherboards, consumer boards tend to have some or all of the M.2 slots under the PCIe slots, which would require removing whatever AICs are in the PCIe slots. In any case, if my memory serves me correctly, M.2 drives are not hot-swappable, so the whole machine must be powered down. U.2 drives may be hot-swapped (including M.2 drives in U.2 converters, if I'm not mistaken, though this introduces expense when you could just buy a U.2 drive to begin with).