NVMe RAID 5/6??? Anyone running it?


zer0gravity

Active Member
Feb 15, 2013
427
82
28
Just seeing if anyone is running NVMe RAID 5/6 and, if so, what hardware they're using. I've only used Storage Spaces / ZFS and never went down the hardware RAID controller side of things.

Just seeing if any improvements have been made.

Thanks!
 

i386

Well-Known Member
Mar 18, 2016
4,220
1,540
113
34
Germany
Are there even controllers with NVMe parity RAID support? The last time I looked at the Broadcom site they only supported RAID 0/1/10.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Can't you do some kind of NVMe RAID on the new Intel consumer boards? Not really what you're looking for, I guess, but the closest thing I know of.

Computex 2017: Intel unleashes NVMe RAID for X299
"Support for RAID 0 NVMe arrays is free, but you have to shell out $99 for a physical VROC key to plug into the header to unlock RAID 1 and RAID 10. For RAID 5, there's a more expensive key (we heard both $199 and $299 are possible). These keys aren't being channeled through motherboard manufacturers, so as far as we know Intel will be selling them directly."
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
@Rand__ Intel has the VROC feature on Intel Xeon Scalable.

Most folks are simply using RAID 1/10. Larger deployments are using different erasure coding schemes across clustered nodes.

If you want to try it, you can set up md RAID or ZFS with NVMe.
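For the ZFS route, a minimal sketch, assuming four NVMe drives showing up as /dev/nvme0n1 through /dev/nvme3n1 (device names and pool name are illustrative):

Code:
# raidz1 is roughly RAID 5; swap in raidz2 for a RAID 6 analogue
zpool create -o ashift=12 nvmetank raidz1 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# verify layout and health
zpool status nvmetank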
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Ah yes, could have surmised as much. Haven't looked into SP (Scalable) too much since I only just recently upgraded to v4.
 

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
Grab whatever NVMe drives you want and put mdraid on top to get your RAID 5/6. It's fast, reliable, and portable to other systems, etc.
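A minimal mdraid sketch, assuming four NVMe drives at /dev/nvme0n1 through /dev/nvme3n1 (device names are illustrative; the mdadm.conf path varies by distro):

Code:
# RAID 5 across four NVMe drives; use --level=6 and an extra drive for RAID 6
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# watch the initial resync
cat /proc/mdstat

# persist the array definition (path is /etc/mdadm.conf on some distros)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf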
 
  • Like
Reactions: TedB

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
Don't need RAID with vSAN... vSAN gives you replication, so there's no need to put a controller in the way to add cost and eat performance.

Sent from my XT1650 using Tapatalk
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
"vSAN eats the performance for you"... not a true statement at all, but vSAN aims for equal shared performance across VMs, so don't expect to see amazing speeds from a single VM.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
And that's not what you are looking for? :)
Still looking for your use case :)

Btw, vSAN is at 6.6 since ESXi 6.5 U1, so some new features are available if the old 6.2 feature set did not match your requirements.
 

BullCreek

New Member
Jan 5, 2016
18
6
3
55
Is it only me, or would others like to see Patrick do a review of that new Tyan EPYC server he has been doing EPYC testing on - not from the perspective of EPYC per se, but as a very fast and potentially affordable all-flash shared storage server - with specific emphasis on the following:

1. How those 24 directly connected NVMe drives perform in different configurations on different OSes.
2. Tyan doesn't seem to have any information on the OCP mezzanine slot for networking - does the Intel X710-DA2 OCP card work, or what do they provide?
3. Could you try it with OmniOS CE? Phoronix did a write-up recently where they ran some benchmarks (mostly CPU) on this server, but they couldn't get OpenIndiana to see the NVMe drives - probably because they are NVMe 1.2 and you need to edit nvme.conf to make that work (see the sketch after this list).
4. Will the board run ESXi 6.5 U1 and allow you to pass the 24 drives through?
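On point 3, a rough sketch of the nvme.conf edit I mean, assuming an illumos-style driver config at /kernel/drv/nvme.conf with a strict-version property (property name taken from the stock illumos config - verify against your OmniOS CE build):

Code:
# /kernel/drv/nvme.conf (illumos / OmniOS CE)
# 1 = only attach to devices reporting a spec version the driver knows,
# 0 = also attach to newer revisions such as NVMe 1.2
strict-version=0;

# reread the driver configuration afterwards, or just reboot
update_drv -f nvme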

I know Patrick is super busy with all the new goodies he has, but inquiring minds want to know when you get time!
 
  • Like
Reactions: TedB and cheezehead

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
I've only built all-flash vSANs; they are pretty fast.

Sent from my XT1650 using Tapatalk
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
Like this... 10-12 nodes, 2 disk groups per node.
Each group: ~1 TB cache, 5 x 1 TB capacity.

Works well. The point remains: do not expect brilliant performance from all that for a single VM, but it will run 100+ VMs just as well.
 
  • Like
Reactions: Rand__

funkywizard

mmm.... bandwidth.
Jan 15, 2017
848
402
63
USA
ioflood.com
My initial testing with mdadm software RAID 5 on a dual E5 box with 3x 512 GB 960 Pro drives was terrible - lower performance than a single drive by itself. Switched to software RAID 1 (again, mdadm) and that went much better. I might have seen better results by tweaking the mdadm stripe cache, or by looking into possible issues with having the 3 drives on PCIe lanes from two separate CPUs (i.e. QPI speed issues), but I didn't get around to testing that.
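For anyone who wants to try the tweaks I skipped, a rough sketch of what I had in mind - the md device name, cache value, and fio job are illustrative:

Code:
# raise the RAID 5/6 stripe cache from the default 256 (in pages, per device)
echo 8192 > /sys/block/md0/md/stripe_cache_size

# pin the benchmark to the socket the drives hang off to rule out QPI hops
numactl --cpunodebind=0 --membind=0 \
    fio --name=r5test --filename=/dev/md0 --rw=randwrite --bs=4k \
        --iodepth=32 --numjobs=4 --direct=1 --runtime=60 --time_based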