
NVMe RAID 5/6??? Anyone running it?

Discussion in 'RAID Controllers and Host Bus Adapters' started by zer0gravity, Oct 9, 2017.

  1. zer0gravity

    zer0gravity Member

    Joined:
    Feb 15, 2013
    Messages:
    149
    Likes Received:
    17
    Just seeing if anyone is running a NVMe raid 5/6 and if so what hardware are they using. I've only used storage spaces / ZFS and never went down the hardware or controller side of things.

    Just seeing if any improvements have been made.

    Thanks!
     
    #1
  2. i386

    i386 Active Member

    Joined:
    Mar 18, 2016
    Messages:
    725
    Likes Received:
    149
    Are there even controllers with NVMe parity RAID support?? The last time I looked at the Broadcom site they only supported RAID 1/0/10.
     
    #2
  3. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    1,437
    Likes Received:
    165
    Can't you do some kind of NVMe RAID on the new Intel consumer boards? Not really what you're looking for, I guess, but the closest thing I know of.

    Computex 2017: Intel unleashes NVMe RAID for X299
    "Support for RAID 0 NVMe arrays is free, but you have to shell out $99 for a physical VROC key to plug into the header to unlock RAID 1 and RAID 10. For RAID 5, there's a more expensive key (we heard both $199 and $299 are possible). These keys aren't being channeled through motherboard manufacturers, so as far as we know Intel will be selling them directly."
     
    #3
  4. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    10,044
    Likes Received:
    3,315
    @Rand__ Intel has the VROC feature on Intel Xeon Scalable.

    Most folks are simply using RAID 1/10. Larger deployments are seeing different erasure coding schemes across clustered nodes.

    If you want to try it, you can set up md RAID or ZFS with NVMe.
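
    For example, a minimal ZFS sketch of a single-parity pool, assuming four NVMe namespaces show up as /dev/nvme0n1 through /dev/nvme3n1 (hypothetical device names; adjust ashift and the layout for your drives):

        # RAID-Z1 (single parity) across four NVMe drives
        zpool create -o ashift=12 nvmetank raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
        zpool status nvmetank   # verify the vdev layout and health

    RAID-Z2 (the RAID 6 analogue) is the same command with raidz2 and one more drive for the extra parity.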
     
    #4
  5. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    1,437
    Likes Received:
    165
    Ah, yes, could have surmised as much. Have not looked into Scalable too much due to having only recently upgraded to v4.
     
    #5
  6. acquacow

    acquacow Active Member

    Joined:
    Feb 15, 2017
    Messages:
    145
    Likes Received:
    54
    Grab whatever NVMe you want and put mdraid on top to get your RAID 5/6. It's fast, reliable, and portable to other systems, etc...
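
    A minimal mdadm sketch, assuming four NVMe namespaces at /dev/nvme0n1 through /dev/nvme3n1 (hypothetical device names; adjust the device count and chunk size to taste):

        # RAID 5 across four NVMe drives
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
        cat /proc/mdstat     # watch the initial resync
        mkfs.xfs /dev/md0    # then format and mount as usual

    For RAID 6, use --level=6 with at least four devices; the array carries two parity blocks per stripe instead of one.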
     
    #6
    TedB likes this.
  7. cheezehead

    cheezehead Active Member

    Joined:
    Sep 23, 2012
    Messages:
    576
    Likes Received:
    131
    Not yet, but looking at doing a vSAN deployment with it.:)
     
    #7
  8. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    1,437
    Likes Received:
    165
    vSAN on RAID'ed NVMe? Why? Or rather, what's the target setup? ;)
     
    #8
  9. acquacow

    acquacow Active Member

    Joined:
    Feb 15, 2017
    Messages:
    145
    Likes Received:
    54
    Don't need RAID with vSAN... vSAN gives you replication; no need to put a controller in the way to add cost and eat performance.

    Sent from my XT1650 using Tapatalk
     
    #9
  10. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    1,412
    Likes Received:
    194
    vSAN eats the performance for you... not a true statement at all, but vSAN aims for equal shared performance across VMs, so don't expect to see amazing speeds from a single VM.
     
    #10
  11. cheezehead

    cheezehead Active Member

    Joined:
    Sep 23, 2012
    Messages:
    576
    Likes Received:
    131
  12. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    1,437
    Likes Received:
    165
    And that's not what you are looking for? :)
    Still looking for your use case. :)

    Btw, vSAN is at 6.6 since ESXi 6.5U1, so some new features are available if the old 6.2 feature set did not match requirements.
     
    #12
  13. BullCreek

    BullCreek New Member

    Joined:
    Jan 5, 2016
    Messages:
    1
    Likes Received:
    2
    Is it only me, or would others like to see Patrick do a review of that new Tyan EPYC server he has been doing EPYC model P testing on? Not from the perspective of EPYC per se, but as a very fast and potentially affordable all-flash shared storage server, with specific emphasis on the following:

    1. How those 24 directly connected NVMe drives perform in different configurations on different OSes.
    2. Tyan doesn't seem to have any information on the OCP mezzanine slot for networking: does the Intel X710-DA2 OCP card work, or what do they provide?
    3. Could you try it with OmniOS CE? Phoronix did a write-up recently where they ran some benchmarks (mostly CPU) on this server, but they couldn't get OpenIndiana to see the NVMe drives, probably because they are 1.2 and you need to edit nvme.conf to make that work (see the nvme.conf sketch below).
    4. Will the board run ESXi 6.5U1 and allow you to pass the 24 drives through?

    I know Patrick is super busy with all the new goodies he has, but inquiring minds want to know when you get time!
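
    On point 3, a hedged sketch of the nvme.conf tweak that reportedly lets newer-spec drives attach on illumos-based systems, assuming the driver's strict version check is the culprit (verify the property name against your OmniOS CE release):

        # /kernel/drv/nvme.conf -- relax the NVMe spec version check
        strict-version=0;

    After editing, reboot (or force a driver config reread with update_drv -f nvme) and check whether the drives attach.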
     
    #13
    TedB and cheezehead like this.
  14. acquacow

    acquacow Active Member

    Joined:
    Feb 15, 2017
    Messages:
    145
    Likes Received:
    54
    I've only built all-flash vSANs; they are pretty fast.

    Sent from my XT1650 using Tapatalk
     
    #14
  15. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    1,437
    Likes Received:
    165
    With how many nodes/diskgroups?
     
    #15
  16. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    1,412
    Likes Received:
    194
    Like this... 10-12 nodes, 2 disk groups per node.
    Each group ~1TB cache, 5 x 1TB capacity.

    Works well. The point remains: do not expect brilliant performance from all that for a single VM, but it will run 100+ VMs just as well.
     
    #16
    Rand__ likes this.
  17. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    1,437
    Likes Received:
    165
    Yep, I can totally agree with this statement:)
     
    #17
  18. funkywizard

    funkywizard mmm.... bandwidth.

    Joined:
    Jan 15, 2017
    Messages:
    93
    Likes Received:
    12
    My initial testing with mdadm software RAID 5 on a dual E5 with 3x 512GB 960 Pro drives was terrible: lower performance than a single drive by itself. Switching to software RAID 1 (again, mdadm) went much better. I might have seen better results by tweaking the mdadm stripe cache, or by looking at possible issues with having the 3 drives on PCIe from two separate CPUs (i.e. QPI speed issues), but I didn't get around to testing that.
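
    For reference, a hedged sketch of the stripe cache tweak, assuming the array is /dev/md0 (the stripe_cache_size knob only exists for RAID 4/5/6 arrays, and the value is in pages per device, so larger settings cost RAM):

        cat /sys/block/md0/md/stripe_cache_size        # default is commonly 256
        echo 4096 > /sys/block/md0/md/stripe_cache_size

    Whether that would have rescued the RAID 5 numbers here is untested.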
     
    #18
Similar Threads: NVMe RAID (all in RAID Controllers and Host Bus Adapters)
- 4X NVME Drives, will it bottleneck a 6G Raid Controller? (Mar 30, 2017)
- RAID 5 / 6 NVMe M.2 PCIe Controller Card (Mar 8, 2017)
- Highpoint has done it! RocketRAID 3800A Series NVMe RAID Host Bus (Aug 8, 2016)
- NVMe/SAS/SATA IOC and RAID Storage Controllers Now Sampling (Mar 1, 2016)
- PCIe NVMe HBA FYI (Apr 29, 2017)
