VMWare vSAN, quad M.2 NVME + 1 cache vs SATA SSDs?


frogtech

Well-Known Member
Jan 4, 2016
I want to build a 4-node VMware cluster and plan to have it be fully hyperconverged and relatively low power, using Supermicro 515 chassis; these are short-depth 1U boxes with 2 full-height x16 slots on a WIO riser.

My original plan was to just use an HBA like an LSI 2008/3008 with most likely 4 Intel DC S-series drives (not sure what capacity, just your standard 6 Gbps SATA enterprise SSDs), plus an NVMe caching drive in each node, either U.2 or M.2, connected to the spare x8 PCIe slot.

But what if I got one of those quad M.2 NVMe add-in cards and just did a full NVMe config across all 4 nodes, without having to run any cabling for drives? I'm really looking to get as much performance as I can from such a small config.

Anyone have experience using PCIe SSDs with vSAN or using something like this https://www.amazon.com/Hyper-M-2-x1...id=1530841291&sr=8-18&keywords=pcie+nvme+card with it?

edit: I just realized that M.2 card uses Intel's VROC tech (Virtual RAID on CPU), but ideally I would just find something that presents the drives directly to the OS.
 

rune-san

Member
Feb 7, 2014
VROC is an Intel-proprietary feature for creating software RAIDs, but one of the requirements for that tech is separately accessible devices anyway. PCIe bifurcation works with the ASUS Hyper card you linked, as well as with the ASRock Ultra Quad M.2 carrier. Either will work with a board that supports bifurcation and will let you present the individual devices to the OS.
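If you want to sanity-check that bifurcation is actually working, each M.2 drive behind the carrier should show up in ESXi as its own PCIe function and its own storage adapter. A rough check from the ESXi shell (output formats vary between releases, so treat the grep pattern as a starting point):

# each SSD on the carrier should appear as a separate PCI device
lspci | grep -i nvme

# and each one should get its own vmhba adapter bound to the nvme driver
esxcli storage core adapter list

If only one device shows up, or none, the slot is almost certainly not set to x4x4x4x4 bifurcation in the BIOS.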

My only problem with M.2 is that capacity is usually limited, especially once you try to find affordable options.
 

Rand__

Well-Known Member
Mar 6, 2014
Welcome to the jungle - of getting performance out of vSAN ;)

There are many threads here that describe issues with that (others' and mine), but here are the key points:

1. Performance depends on your workload (I know, that's a shocker ;))
2. vSAN is optimized for many users, not few (and will limit individual performance*)
3. Write performance depends only on the number and speed of the cache devices (and the associated write policies) - see the sketch below

*Disclaimer: my personal experience, no hard data, no official VMware statements
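Since everything rides on the cache device per point 3: each disk group takes exactly one cache device plus capacity devices, and claiming disks for vSAN looks roughly like this from the ESXi shell. The device IDs below are placeholders for your actual naa./t10. identifiers:

# list local disks and whether vSAN considers them eligible
vdq -q

# create a disk group: one cache SSD (-s) plus one or more capacity devices (-d)
esxcli vsan storage add -s t10.NVMe____CACHE_DEVICE_ID -d naa.CAPACITY_DEVICE_ID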

Besides that, I would very much recommend Optane drives as cache b/c they really work best in a small setup. I have run a 6-write-device P3700-based vSAN (3 boxes with 2 disk groups each, 2 writers per object) and it was not convincing.
But my workload is basically QD1, so yours might vary.
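If you want to see what your own config does under that kind of load, a fio run like this inside a Linux guest on the vSAN datastore approximates a QD1 random-write workload (file path and size are arbitrary):

# 4k random writes at queue depth 1, bypassing the guest page cache
fio --name=qd1-write --filename=/mnt/test/fio.bin --size=1G \
    --rw=randwrite --bs=4k --iodepth=1 --ioengine=libaio \
    --direct=1 --time_based --runtime=60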

In general, using a multi-M.2 setup should work if you get them presented as individual drives. You might run into heat/throttling issues, and M.2 drives might not have the speed/capacity of U.2/PCIe drives, but if it's more convenient...
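On the throttling point, you can at least watch drive temperature from the ESXi shell without pulling the card; the device ID below is a placeholder, and not every NVMe drive reports full SMART data through this path:

# find the device IDs, then pull SMART data (includes drive temperature)
esxcli storage core device list | grep -i nvme
esxcli storage core device smart get -d t10.NVMe____YOUR_DEVICE_ID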

But it's not entirely clear to me what slots you have free now; you say 2 x16 slots are available, but then only have 1 x8 free?
 

lukelloyd1985

New Member
Apr 18, 2019
@frogtech Did you get anywhere with using the Asus Hyper card?

I have one along with 4x Crucial CT500 M.2 NVMe drives, but I can't seem to get them to show up in vSphere/ESXi :(
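This is what I've been checking from the ESXi shell so far (grep pattern may need tweaking) - is there anything else I should be trying?

# do the drives show up as PCI devices at all?
lspci | grep -i nvme

# does ESXi create a storage adapter for each one?
esxcli storage core adapter list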
 

lukelloyd1985

New Member
Apr 18, 2019
frogtech said: Nah, I gave up on that a long time ago.
Fair enough.

I managed to get it working. I had to stick with an NVMe drive brand that was on the VMware compatibility list, so I went with the Samsung 970 Evo (technically only the 960 is on the list, but the 970 works too).

I now have 3 (as I only got 3) NVMe drives showing up in ESXi :)
 

frogtech

Well-Known Member
Jan 4, 2016
@lukelloyd1985 What board are you using?