VMware vSAN, quad M.2 NVMe + 1 cache vs SATA SSDs?

Discussion in 'VMware, VirtualBox, Citrix' started by frogtech, Jul 5, 2018.

  1. frogtech

    frogtech Well-Known Member

    Jan 4, 2016
    I'm wanting to build a 4 node VMware cluster and plan to have it be completely converged and relatively low power, using SuperMicro 515 chassis. These are short-depth 1U boxes with 2 full-height x16 slots on a WIO riser.

    My original plan was to use an HBA like an LSI 2008/3008 with most likely 4 Intel DCS drives (not sure what capacity, just your standard 6 Gbps SATA enterprise SSDs), plus an NVMe drive, either U.2 or M.2, as a caching drive in each node connected to the spare x8 PCIe slot.

    But what if I get one of those quad M.2 NVMe add-in cards and just do a full NVMe config across all 4 nodes, and not have to run any cabling for drives? I really am looking to get as much performance as I can from such a small config.

    Anyone have experience using PCIe SSDs with vSAN or using something like this https://www.amazon.com/Hyper-M-2-x1...id=1530841291&sr=8-18&keywords=pcie+nvme+card with it?

    edit: I just realized that M.2 card uses some VROC tech (Virtual RAID on CPU), but ideally I would just find something that presents the drives directly to the OS.
  2. rune-san

    rune-san Member

    Feb 7, 2014
    VROC is an Intel proprietary feature for creating Software RAIDs, but one of the requirements for the tech is separately accessible devices. Bifurcation works on the ASUS Hyper Card you linked, as well as the ASRock Ultra M.2 carrier. Either will work with a board that supports bifurcation and allow you to present the individual devices to the OS.
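    One way to sanity-check that bifurcation is actually splitting the slot is to count the NVMe controllers the OS sees, e.g. in `lspci` output on a Linux box. A hedged sketch below; the sample output, device names, and PCI addresses are all invented for illustration:

    ```python
    # With x4x4x4x4 bifurcation enabled on the slot, each M.2 drive on a
    # carrier card should appear as its own NVMe controller (PCI class 0108).
    # Sample `lspci -nn` output below is made up.

    sample_lspci = """\
    3b:00.0 Non-Volatile memory controller [0108]: Samsung Electronics NVMe SSD Controller
    3c:00.0 Non-Volatile memory controller [0108]: Samsung Electronics NVMe SSD Controller
    3d:00.0 Non-Volatile memory controller [0108]: Samsung Electronics NVMe SSD Controller
    3e:00.0 Non-Volatile memory controller [0108]: Samsung Electronics NVMe SSD Controller
    """

    def count_nvme_controllers(lspci_output):
        # Count lines tagged with the NVMe PCI class code 0108.
        return sum("[0108]" in line for line in lspci_output.splitlines())

    print(count_nvme_controllers(sample_lspci))  # 4
    ```

    If the board can't bifurcate the slot, you'd see only one device (or need a card with a PLX switch instead).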

    My only problem with M.2 is that capacity is usually limited when combined with trying to find affordable options.
  3. Rand__

    Rand__ Well-Known Member

    Mar 6, 2014
    Welcome to the jungle - of getting performance out of vSAN ;)

    There are many threads here that describe issues with that (others' and mine), but here are the key points:

    1. Performance depends on your workload (I know, that's a shocker;))
    2. vSAN is optimized for many users, not few (and will limit individual performance*)
    3. Write performance depends only on the number and speed of the cache devices (and the associated write policies)

    *Disclaimer: My personal experience, no hard data, no official VMWare statements
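    Point 3 above can be turned into a rough sizing sketch. This is a hedged back-of-envelope bound, not vSAN's actual behavior; the throughput figure and replica count are illustrative assumptions:

    ```python
    # Rough upper bound on cluster write throughput, taking point 3
    # literally: writes land on the cache tier first, so the cache
    # devices gate burst write speed. Illustrative numbers only.

    def cluster_write_bound_mbps(nodes, diskgroups_per_node,
                                 cache_write_mbps, replicas=2):
        """Aggregate cache write bandwidth divided by the replication
        factor (each client write is committed on `replicas` nodes)."""
        aggregate = nodes * diskgroups_per_node * cache_write_mbps
        return aggregate / replicas

    # 4 nodes, 1 diskgroup each, one NVMe cache drive at ~2000 MB/s seq write:
    print(cluster_write_bound_mbps(4, 1, 2000))  # 4000.0
    ```

    Real numbers will come in well under this bound once latency, write policies, and the destage to the capacity tier enter the picture.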

    Besides that, I would very much recommend Optane drives as cache b/c they really work best in a small setup. I have run a 6-cache-device P3700-based vSAN (3 boxes, 2 diskgroups each, 2 writers per object) and it was not convincing.
    But my workload is basically QD1 so yours might vary.
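    For sizing purposes, "2 writers per object" above is FTT=1 mirroring, which also sets the capacity overhead. A quick hedged sketch of what that does to usable space (all numbers made up for illustration):

    ```python
    # Back-of-envelope usable capacity under vSAN RAID-1 mirroring.
    # With FTT=1 every object is stored twice, so raw capacity is
    # roughly halved (ignoring slack space and witness overhead).

    def usable_capacity_tb(raw_tb_per_node, nodes, ftt=1):
        """FTT+1 full copies of each object are kept across the cluster."""
        return raw_tb_per_node * nodes / (ftt + 1)

    # 4 nodes with 4 TB of capacity-tier flash each, FTT=1:
    print(usable_capacity_tb(4, 4))  # 8.0
    ```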

    In general a multi-M.2 setup should work if the drives are presented individually. You might run into heat/throttling issues, and M.2 drives might not have the speed/capacity of U.2/PCIe drives, but if it's more convenient...

    But it's not entirely clear to me what slots you have free now: you say 2 x16 slots are available, but only have 1 x8 free?