ESXi and x570 NVME RAID


nutsnax

Active Member
Nov 6, 2014
Looking at getting an x570 motherboard for my 3900X and want to make use of the onboard NVMe RAID function. Will ESXi recognize an x570 NVMe RAID volume, or will passthrough to a VM be required to make it work?

edit: I won't be booting ESXi from this volume; it would be only for database use.

Thanks!
 

jdnz

Member
Apr 29, 2021
It won't work natively. AMD's RAID is a soft-RAID system like Intel RST - it relies on an operating-system driver, and ESXi has never had much support for that (the exceptions being things like HPE's low-end servers with their B140i controller, where HPE itself wrote ESXi drivers - though a lot of that lost support with ESXi 7).

ESXi essentially requires a hardware RAID controller (and with ESXi 7, a relatively recent one).
 

nutsnax

Active Member
Nov 6, 2014
So this would require pass-through to a VM? I just want to make sure I know what I'm getting into before I proceed :)
 

jdnz

Member
Apr 29, 2021
You should be able to pass through the NVMe drives - but remember you pass through the ENTIRE NVMe device - so if you're going to pass through BOTH of the x570's NVMe drives, ESXi itself and the guest OS images will need to live on something other than NVMe (SATA SSD, etc.).

Also, since AMD's RAID drivers are Windows-only, the guest OS can only be Windows.

I've actually got a 3900X/x570 setup running ESXi 7 in a 6-drive rackmount case at work - we just use the NVMe for ESXi and the guest OS images (the main thing running on it is our Zoneminder setup; the spinning rust holds the video footage, as it's bulky but not that demanding on throughput).
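
For reference, recent ESXi builds let you toggle passthrough from the command line as well as from the host UI. A minimal sketch, assuming ESXi 7.0+ - the PCI address below is a placeholder, so check the list output for your actual NVMe controllers first:

```
# Find the PCI addresses of the NVMe controllers
esxcli hardware pci list | grep -B 2 -i nvme

# Enable passthrough for one controller (address is a placeholder)
esxcli hardware pci pcipassthru set -d 0000:01:00.0 -e true
```

After that, the device shows up as an assignable PCI device when you edit the VM's settings. On older builds the same toggle lives in the host client UI (Manage > Hardware > PCI Devices) and typically needs a host reboot to take effect.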
 

nutsnax

Active Member
Nov 6, 2014
Thanks for that info - reading more into this, what I might try is to register the drives as datastores in ESXi, thin-provision virtual disks across all of them, and then add those disks to my Ubuntu VM...

then use mdadm to RAID the thin-provisioned disks once they're added to the virtual machine.

I ordered four Samsung PM9A1 drives and I already have a quad card, so hopefully this will do the trick.
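
The mdadm step above would look something like this inside the Ubuntu VM - the device names are placeholders for whatever the four virtual disks enumerate as (check `lsblk` first):

```
# Stripe the four virtual disks into a single RAID-0 array
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on it, then persist the array config so it
# assembles automatically at boot
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```

RAID-0 matches the all-out-speed goal here; for a database you might prefer `--level=10` to keep some redundancy at the cost of half the capacity.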
 

jdnz

Member
Apr 29, 2021
If you want decent performance inside the Ubuntu guest you'll want to thick-provision - there's too much overhead with thin provisioning. Even then you'll still take a performance hit compared to passing through the PCIe devices - and if you're RAIDing four NVMe drives, I'm guessing you must have something that really hammers the array to need that much speed!

Does the quad card you've ordered just use bifurcation, or is it a HighPoint?

On our systems where we need massive IOPS we go for U.2 drives (PM9A3) with a suitable retimer (Supermicro AOC-SLG4-4E4T) - mainly because we want to be able to hot-swap the NVMe drives; taking the host down to pull it out of the rack and get at an internal card isn't an option.
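
For the thick-provisioning step, one way is vmkfstools from the ESXi shell - a sketch, with made-up datastore and VM paths and sizes:

```
# Create an eager-zeroed thick disk on an NVMe-backed datastore
vmkfstools -c 200G -d eagerzeroedthick /vmfs/volumes/nvme-ds1/dbvm/db-data.vmdk

# Or inflate an existing thin disk to eager-zeroed thick in place
vmkfstools -j /vmfs/volumes/nvme-ds1/dbvm/db-data.vmdk
```

Eager-zeroed thick writes every block up front, so the guest never pays the first-write allocation/zeroing penalty that thin and lazy-zeroed disks incur.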
 

nutsnax

Active Member
Nov 6, 2014
My quad card is just bifurcation. And you are correct - I have a workload with massive IOPS that hammers the drives. This isn't mission-critical, just something to hopefully keep me from staying up until 4 AM trying to meet a crazy deadline like I've been doing for the past few weeks.

My budget is most likely poverty-tier compared to yours. I just ordered a used Asus x570 board and am going to repurpose my desktop 3900X for this, then sell the old parts to help offset the cost.

Also, thanks for the heads-up on the performance hit from having ESXi do anything with those drives other than pass them through. I'd forgotten about that.