VM with passthrough "freezes" entire ESXi box when shutdown/rebooting guest


epicurean

Active Member
Sep 29, 2014
785
80
28
Hello epicurean,

Before implementing my version of a solution, I recommend following the instructions given by "mvrk" in post number four at the following link.

mvrk's solution is more elegant and easier to implement. If for some reason mvrk's solution doesn't work for you, I'll provide further assistance with my version.

Please let the forum know how it goes.

Click here if you'd like to read a little more about the development experience I had on my way to passing through a GPU. The capability does work even if the process is anything but smooth. My server is now passing through 2 x NVIDIA 2000 GPUs to two Windows 7 VMs without much perceptible trouble.

Regards,
Jake
Thanks Jake,
I don't quite understand what mvrk was doing, and I'm unsure whether putting in those exact entries is appropriate for my motherboard and AMD cards. Is it a general solution, or do I need to find the specific addresses for my AMD cards?

Will definitely give your solution a good read!
 

mmolinari

New Member
Apr 18, 2017
1
0
1
49
I was experiencing the same issue with a GTX 980, and apparently what mvrk suggests almost worked for me: changing "bridge" to "link" fixed the freeze, but then the VM wouldn't boot anymore after the first time, until I rebooted the host (the VM logs mention a panic). Changing it to "d3d0" seems to work: still no freezes, and I can power off and on the Windows VM at will.
If you try this, make sure to reboot the host after changing /etc/vmware/passthru.map.
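
A minimal sketch of what such an entry could look like for an NVIDIA card, assuming the usual four-column passthru.map layout (PCI vendor ID, device ID, reset method, fptShareable); 10de is NVIDIA's vendor ID and ffff matches any device ID, so adjust as needed:

# /etc/vmware/passthru.map (example entry only)
# vendor-id  device-id  reset-method  fptShareable
10de ffff d3d0 false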

Thanks a lot for this thread.
 

ridney

Member
Dec 8, 2015
77
33
18
Singapore
I just added a passthrough GPU to my Windows 10 VM and I'm experiencing the same freezing on VM shutdown. How are we supposed to add the line

1002 ffff link false

to /etc/vmware/passthru.map?

Update: never mind. I used WinSCP over SSH to edit the file.
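
For anyone who prefers to do it directly from the ESXi shell instead of WinSCP, a rough sketch (SSH has to be enabled on the host first, e.g. from the host client's services page; exact menu names vary by version):

# edit the passthrough map, then reboot the host so the new reset method takes effect
vi /etc/vmware/passthru.map
reboot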
 

Khalendar

New Member
Aug 18, 2015
2
0
1
I have a GTX 1070 and a GTX 1050 Ti passed through to two different VMs in ESXi 6.5 with no major issues. The only config change made was to hide the hypervisor from the NVIDIA drivers. I originally tried passthrough with a Radeon Pro WX 4100, but that had major problems with freezing the entire ESXi host on VM reboots.
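
For context, hiding the hypervisor is usually done with a single .vmx entry; a sketch of what that looks like (it stops the guest from reporting that it runs under a hypervisor, which keeps consumer NVIDIA drivers from refusing to load with a Code 43 error):

hypervisor.cpuid.v0 = "FALSE"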

On another note, I also had these freezing issues when passing through a USB 3.0 card that used a Fresco Logic controller. The issues went away after switching to an ASMedia controller.

Having an Intel SSD 750 NVMe passed through to the VM along with the NVIDIA GPU and USB 3.0 card also caused stability issues in the VM (NVIDIA driver crashes), so I just don't pass through the SSD and it's been stable since.
 

chune

Member
Oct 28, 2013
119
23
18
Still present in 6.5 U3. This worked great to get my video cards and USB going, but it still locks up whenever I try to pass through my LSI 2008 controller. I tried the "d3d0" and "link" settings, but it PSODs on both when shutting down a VM.

# ATI
1002 6798 link false
# ETRON
15AD 0779 link false
1B6F 7052 link false
#ASMEDIA
1B21 1042 link false
#LSI 2008
1000 0072 d3d0 default
 

fishtacos

New Member
Jun 8, 2017
23
13
3
I was experiencing the same issue with a GTX 980, and apparently what mvrk suggests almost worked for me: changing "bridge" to "link" fixed the freeze, but then the VM wouldn't boot anymore after the first time, until I rebooted the host (the VM logs mention a panic). Changing it to "d3d0" seems to work: still no freezes, and I can power off and on the Windows VM at will.
If you try this, make sure to reboot the host after changing /etc/vmware/passthru.map.

Thanks a lot for this thread.
This was exactly my experience, but with an AMD RX 550. Changing to d3d0 was what made the setup workable: I could install the proper (new) driver and reboot the VM without having to reboot the host as well.

At this point I have one GPU in each ESXi host passed through for mining. I can't seem to get a stable two-GPU-per-host configuration going, but that's my next project, I suppose.
 

chune

Member
Oct 28, 2013
119
23
18
I have 5 x RX 570 8 GB cards passed through to a Xubuntu VM running on ESXi 6.0 (U3, I think?). It's totally stable once it boots, but I have to reset the VM/host a few times for everything to boot properly initially. Not sure if it's a passthrough issue or a Xubuntu issue. The key with the 8 GB cards was to use a UEFI BIOS for the VM and add the following line to the vmx:
pciPassthru.use64bitMMIO="TRUE"
as outlined here:
VMware Knowledge Base
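
A sketch of the relevant .vmx lines for this, assuming a UEFI VM as described above. firmware = "efi" selects the UEFI firmware; the 64bitMMIOSizeGB entry is optional and sizes the 64-bit MMIO window (the right value depends on how much BAR space your cards need; see the KB article):

firmware = "efi"
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"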

I tried the same thing on Win10, and as soon as I enabled a second GPU it would crash. I would be interested to know if anyone gets that working.
 

Dean

Member
Jun 18, 2015
116
10
18
48
I have 5 x RX 570 8 GB cards passed through to a Xubuntu VM running on ESXi 6.0 (U3, I think?). It's totally stable once it boots, but I have to reset the VM/host a few times for everything to boot properly initially. Not sure if it's a passthrough issue or a Xubuntu issue. The key with the 8 GB cards was to use a UEFI BIOS for the VM and add the following line to the vmx:
pciPassthru.use64bitMMIO="TRUE"
as outlined here:
VMware Knowledge Base

I tried the same thing on Win10, and as soon as I enabled a second GPU it would crash. I would be interested to know if anyone gets that working.
Was this enabling a second GPU as in to a separate VM, or two GPUs in one VM?
 

chune

Member
Oct 28, 2013
119
23
18
Ok...my understanding is you can only assign one GPU per VM, unless you are using Nvidia Grid.

Sent from my Moto Z (2) using Tapatalk
Sounds like you are just starting down the GPU passthrough rabbit hole. There are quite a few other threads on this, but to sum it up:

VDGA = aka GPU passthrough. You can load up a server with as many physical GPUs as it will POST with (with above-4G decoding disabled) and pass each physical GPU through to a different (or the same) VM. This works best with consumer AMD GPUs, but NVIDIA can be made to work with some tweaks.
VSGA = Needs special cards like the NVIDIA GRID K1/K2. You can carve out resources from a single physical GPU and assign them to multiple VMs. No special licensing required.
vGPU = Needs special cards like the newer gen 2 NVIDIA GRID. You can carve out resources from a single physical GPU and assign them to multiple VMs. Much better performance than gen 1 GRID, but you need special licensing per user from NVIDIA AND VMware.

Most people on this forum care about VDGA, and that's what we are talking about in this thread.
 

Dean

Member
Jun 18, 2015
116
10
18
48
Sounds like you are just starting down the GPU passthrough rabbit hole. There are quite a few other threads on this, but to sum it up:

VDGA = aka GPU passthrough. You can load up a server with as many physical GPUs as it will POST with (with above-4G decoding disabled) and pass each physical GPU through to a different (or the same) VM. This works best with consumer AMD GPUs, but NVIDIA can be made to work with some tweaks.
VSGA = Needs special cards like the NVIDIA GRID K1/K2. You can carve out resources from a single physical GPU and assign them to multiple VMs. No special licensing required.
vGPU = Needs special cards like the newer gen 2 NVIDIA GRID. You can carve out resources from a single physical GPU and assign them to multiple VMs. Much better performance than gen 1 GRID, but you need special licensing per user from NVIDIA AND VMware.

Most people on this forum care about VDGA, and that's what we are talking about in this thread.
Ok... Sounds like you're on the right path.

Sent from my Moto Z (2) using Tapatalk