NVMe RAID with Virtualization (ESXi)


Andrew911tt

New Member
Feb 18, 2016
15
2
3
39
If I have a new server with 4, 6, or 8 SATA drives running ESXi, I would usually connect them to a RAID card and present the OS with one large drive, which I could then carve up any way I see fit for the VMs I will be running. Using RAID 10 or 6 would give me some amount of fault tolerance against drive failures.

My question is: how would I do the same thing for 4, 6, or 8 NVMe drives? There does not seem to be a way to RAID NVMe drives, whether U.2, U.3, or M.2. All of the RAID cards would become bottlenecks for the throughput of the NVMe drives.

I also understand that if you are running many ESXi servers in a group you can use vSAN, and then you want JBOD, but what if I am only running one server?

Thanks for any help
 

oneplane

Well-Known Member
Jul 23, 2021
846
485
63
Like gea wrote: pass the drives through to a ZFS virtual machine, and re-export them to the hypervisor. Best mix of flexibility, integrity, performance, reliability.
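A minimal sketch of that layout, assuming the NVMe devices are passed through via PCIe passthrough to a ZFS-capable storage VM (e.g. OmniOS or TrueNAS); the pool, dataset, and device names below are illustrative, not from the thread:

```shell
# Inside the storage VM, after PCIe passthrough of the NVMe devices.
# Striped mirrors give RAID 10-like redundancy and performance.
zpool create -o ashift=12 tank \
    mirror nvme0 nvme1 \
    mirror nvme2 nvme3

# Dataset that will be re-exported to ESXi over NFS.
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore
```

ESXi then mounts that NFS share as a datastore and carves it up for the other VMs.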
 
  • Like
Reactions: awedio

gea

Well-Known Member
Dec 31, 2010
3,186
1,202
113
DE
Just to add

Many NVMe drives are "much faster than needed" but come with a high hardware demand (4 PCIe lanes per NVMe) and high complexity. In many cases 12/24G SAS, especially in multipath (mpio) mode at 2x12G/2x24G, is near NVMe performance, with the option to use dozens or even hundreds of disks without problems or complex settings.
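A back-of-envelope check of that comparison (the per-link figures below are approximate assumptions with encoding overhead already folded in: ~2.4 GB/s usable per 24G SAS link, ~7.9 GB/s for a PCIe 4.0 x4 NVMe drive):

```shell
# Rough usable-bandwidth comparison; all figures are nominal assumptions.
awk 'BEGIN {
    sas_mpio = 2 * 2.4          # dual-port (mpio) 2x24G SAS, GB/s
    nvme_x4  = 7.9              # PCIe 4.0 x4 NVMe, GB/s
    printf "2x24G SAS mpio: %.1f GB/s\n", sas_mpio
    printf "PCIe4 x4 NVMe : %.1f GB/s\n", nvme_x4
    printf "SAS/NVMe      : %.0f%%\n", sas_mpio / nvme_x4 * 100
}'
```

So a dual-ported 24G SAS drive lands at roughly 60% of a Gen4 NVMe drive's link bandwidth, while needing far fewer host lanes per disk.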
 
  • Like
Reactions: ano

awedio

Active Member
Feb 24, 2012
776
225
43
Like gea wrote: pass the drives through to a ZFS virtual machine, and re-export them to the hypervisor. Best mix of flexibility, integrity, performance, reliability.
Doesn't this sound like the "old chicken & egg" problem?
 

gea

Well-Known Member
Dec 31, 2010
3,186
1,202
113
DE
More like the classic approach of using a dedicated (2nd server) NFS/iSCSI SAN appliance for ESXi.

The only difference is that the SAN appliance is virtualized. You need to autostart the storage VM from a local datastore first after ESXi powers on, wait some time until the VM is up and its storage is available to ESXi, and then power up the other VMs from NFS/ZFS.

see All-In-One (ESXi with a virtualized Solarish-based ZFS SAN in a box)
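On the ESXi side, mounting the storage VM's NFS export as a datastore can be sketched like this (the IP address, share path, and datastore name are placeholders):

```shell
# After the storage VM has booted and exported its pool over NFS:
esxcli storage nfs add -H 192.168.0.10 -s /tank/vmstore -v zfs-vmstore

# Confirm the datastore is mounted before powering on the other VMs.
esxcli storage nfs list
```

The other VMs then live on that datastore, which is why their autostart has to be delayed until the storage VM is fully up.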
 
Last edited: