Troubleshooting GPU passthrough ESXi 6.5


superempie

Member
Sep 25, 2015
85
11
8
The Netherlands
Yeah, 6.7u3 works for me the best also. And indeed, reserve memory.
The svga.present param is something I had mixed experiences with in the past. Sometimes worked, sometimes not. The steps I described are the ones I follow when setting up a new Win VM.
hypervisor.cpuid.v0 shouldn't be needed now Nvidia removed the restriction for it, but doesn't hurt in my opinion.

Another long shot would be if the GTX1050 is connected through DisplayPort and you run into the DisplayPort bug.
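For reference, the two VMX parameters discussed above go into the VM's .vmx file (or the host client's advanced configuration parameters); a sketch of how they typically look for NVIDIA passthrough (whether you need them depends on your GPU and ESXi/driver versions, as noted above):

```
hypervisor.cpuid.v0 = "FALSE"
svga.present = "FALSE"
```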
 

Docop

Member
Jul 19, 2016
45
0
6
46
I can't speak for the GTX cards, as earlier versions did need some extra lines in the VM config if I remember correctly. I run the Quadro lineup and those are all directly compatible. Even with the newer Proxmox no longer working, a direct Proxmox 7.1 install with no updates might give you some hints. You can also install it on a USB stick to try it out.
 

sev

New Member
Jul 26, 2022
8
0
1
Yeah, 6.7u3 works for me the best also. And indeed, reserve memory.
The svga.present param is something I had mixed experiences with in the past. Sometimes worked, sometimes not. The steps I described are the ones I follow when setting up a new Win VM.
hypervisor.cpuid.v0 shouldn't be needed now Nvidia removed the restriction for it, but doesn't hurt in my opinion.

Another long shot would be if the GTX1050 is connected through DisplayPort and you run into the DisplayPort bug.
I've tried with both DVI and HDMI, no difference there.
 

ARNiTECT

Member
Jan 14, 2020
93
9
8
I initially had issues getting 2x GPUs + iGPU to pass through.
  • You could try an older host BIOS. I am stuck on an old BIOS for my Supermicro X11-SCA-F, as newer ones won't even boot with the 3 GPUs.
  • The order that the GPUs are set in BIOS made a difference for me: Load Optimised Defaults, Primary Display = PCI, Primary PEG = Slot 4, Primary PCI = Onboard, Internal Graphics = Enabled, Option ROM Video = UEFI.
  • When my primary GPU stops working, I always need to fully power off the host and do a cold boot; resetting the host doesn't work.
  • Toggling the primary GPU for passthrough in ESXi 7 was a nightmare: toggling 1 of its 4 devices sets the others off in a random active/disabled flicker. After a lot of persistence I got all 4 active, as without all 4 I couldn't use the GPU in a VM; however, I only pass through 2 devices (video & audio).
  • Add all of the GPU's devices into /etc/vmware/passthru.map
  • I went through a few headless ghost display emulator adapters in HDMI and DP until one worked.
  • I installed VNC before GPUs
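For anyone following the passthru.map step above: entries in /etc/vmware/passthru.map are one line per vendor/device pair, in the form `vendor-id device-id resetMethod fptShareable`. A sketch for an NVIDIA card (assumptions on my part: 10de is NVIDIA's vendor ID, `ffff` acts as a wildcard for any of its device IDs, and d3d0 is one of the supported reset methods; adjust to your hardware):

```
# NVIDIA: use D3D0 power-state reset for all devices of this vendor
10de ffff d3d0 false
```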
 

sev

New Member
Jul 26, 2022
8
0
1
I initially had issues getting 2x GPUs + iGPU to pass through.
  • You could try an older host BIOS. I am stuck on an old BIOS for my Supermicro X11-SCA-F, as newer ones won't even boot with the 3 GPUs.
  • The order that the GPUs are set in BIOS made a difference for me: Load Optimised Defaults, Primary Display = PCI, Primary PEG = Slot 4, Primary PCI = Onboard, Internal Graphics = Enabled, Option ROM Video = UEFI.
  • When my primary GPU stops working, I always need to fully power off the host and do a cold boot; resetting the host doesn't work.
  • Toggling the primary GPU for passthrough in ESXi 7 was a nightmare: toggling 1 of its 4 devices sets the others off in a random active/disabled flicker. After a lot of persistence I got all 4 active, as without all 4 I couldn't use the GPU in a VM; however, I only pass through 2 devices (video & audio).
  • Add all of the GPU's devices into /etc/vmware/passthru.map
  • I went through a few headless ghost display emulator adapters in HDMI and DP until one worked.
  • I installed VNC before GPUs

So I edited the passthru.map, set the device ID in there along with the vendor ID, and set the reset method to d3d0. Following your advice, I did a complete shutdown and cold booted the machine. When it came back up and I started the VM with the passthrough, much to my surprise, I saw the Windows logon screen on the monitor the GPU was attached to!

Unfortunately, in pretty quick order, the monitor blue screened; I rebooted/cold booted and tried a bunch of stuff, but couldn't get it to work again.


So I'm using a Dell Precision 3620: an Intel i7-6700 with an Intel C236 chipset, and I have updated the BIOS to the latest version. I might try downgrading, but I cannot set which slot the graphics card comes up on, just which one is primary.
 
Dec 3, 2020
47
14
8
Hi all,

I am new to this topic and I would like to get my iGPU of a AMD Ryzen 7 Pro 4750G working in passthrough mode. I am using this CPU in a ASRock Rack x570D4U-2L2T mainboard.

IOMMU is activated in the BIOS and this is the result:

[attached screenshot]

I have added all 7 0000:30:00.x devices to a test VM and exclusively allocated the memory to this VM, but the VM won't start.

[attached screenshot]


When I open the .vmx file of this VM, I can see that these 7 devices have a different IOMMU group address (0000:048:00.x instead of the 0000:030:00.x shown in the GUI).

What's the reason? Which ID should I write into passthru.map?

Is there a way to define my own IOMMU groups in ESXi, so I can separate these AMD/ATI devices from the rest?
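One possible explanation for the address mismatch (an assumption on my part, not something confirmed in this thread): the GUI prints the PCI bus number in hexadecimal while the .vmx appears to use decimal, and bus 0x30 is exactly 48 in decimal. A quick sanity check:

```python
# PCI addresses such as 0000:30:00.0 conventionally use hex fields.
# A tool printing the same bus number in decimal shows 48 instead of 30.
bus_hex = "30"                  # bus as shown in the ESXi GUI
bus_decimal = int(bus_hex, 16)  # bus as it appears in the .vmx
print(bus_decimal)              # -> 48, matching 0000:048:00.x
```

If that holds, both notations refer to the same devices, so the hex form is the one to use wherever ESXi expects a PCI address.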
 

Reefs

New Member
Oct 21, 2025
2
3
3
I appreciate that this is an old thread, but I note there doesn't appear to be an answer here.

It would appear that the reset on the PCI GPU is working, but after the reset the host notices the hardware change and PCI HotPlug grabs the GPU. It's the host locking the GPU, not the guest, after all.

Disable PCIHotPlug:
esxcli system settings kernel set -s "enablePCIEHotplug" -v "FALSE"

Change the reset to default in the passthru.map
# NVIDIA
10de ffff default false
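To check that both changes took effect, something like this from the ESXi shell should work (host-side commands; a reboot is still needed for the kernel setting):

```
# should report enablePCIEHotplug = FALSE
esxcli system settings kernel list -o enablePCIEHotplug

# should show the edited NVIDIA entry
grep 10de /etc/vmware/passthru.map
```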

There was an article released about this, but it may not remain published with all the recent VMware changes.

Thanks to all who contributed to this long thread; it helped me get to this answer. If anybody is still reading this, I hope it gives some closure.
 

ARNiTECT

Member
Jan 14, 2020
93
9
8
Disable PCIHotPlug:
esxcli system settings kernel set -s "enablePCIEHotplug" -v "FALSE"

Change the reset to default in the passthru.map
# NVIDIA
10de ffff default false
Thanks for posting.
I tried the above, verified it was set up, shut down the host, and rebooted the host, but sadly it made no difference on my setup (ESXi 7.0.2 build 18538813).
...back to using scripts at startup/shutdown of my Windows 11 VM to disable my A4000 Ampere GPU
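If those are in-guest Windows scripts, a sketch of the idea using Windows' built-in pnputil (assumptions: the scripts run inside the Windows VM from an elevated prompt, and the instance ID below is a placeholder to be replaced with your GPU's, found via `pnputil /enum-devices /class Display`):

```bat
:: disable the GPU (e.g. before the VM shuts down, so it releases cleanly)
pnputil /disable-device "PCI\VEN_10DE&DEV_XXXX&SUBSYS_XXXXXXXX&REV_XX\X&XXXXXXX&X&XXXX"

:: re-enable it (e.g. once the VM has started)
pnputil /enable-device "PCI\VEN_10DE&DEV_XXXX&SUBSYS_XXXXXXXX&REV_XX\X&XXXXXXX&X&XXXX"
```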
 

Reefs

New Member
Oct 21, 2025
2
3
3
My setup is still 6.7 U3. I’m pleased to say the issue has not recurred since making the changes from my previous post.

The telling thing for me was the issue only occurred after the VM had booted and pass through had been successful once.

I could reboot the host without issue if the vm with the pass through graphics card had never been started.

Also, the VM would never boot a second time after shutdown and needed a host reboot to start the vm again.

Your issue may be different - but this is what led me to understand that the GPU was locked.
I had expected it to be the VM causing the lock, but it was silently the host.

Good Luck, hope you find your answer
 

AveryFreeman

consummate homelabber
Mar 17, 2017
428
58
28
44
Near Seattle
averyfreeman.com
My setup is still 6.7 U3. I’m pleased to say the issue has not recurred since making the changes from my previous post.
Wow, I'm shocked

Reading this thread back a bit was an astounding blast from the past. Physical passthrough isn't really much of a thing anymore, is it? I mean, there are other solutions for sharing your IGD, like GVT-d. I may have cut my teeth on the Sandy and Ivy Bridge chipsets of 2011, but anything I've run on Skylake or newer doesn't have a problem sharing virtual GPU resources. All the components are also many times faster than they used to be, so I'm not really sure what the use is in tying an entire GPU up with one VM, unless you're a gamer or a masochist.

When I was reading the voodoo witchcraft stuff we all used to try because phys passthrough is so poorly documented, it really brought me back to all the hours I wasted trying to achieve one unsupported BS edge case or another, as well. You're kind of at the mercy of VMware using their software, unfortunately, and they're really only concerned about providing vGPU support for $4,000+ VDI systems - which makes sense, because they're a business who attracts customers practically made out of money. Especially now that Broadcom purchased VMware and fired like half the staff already, VMware's only innovations I've noticed recently have been based around making their products become more expensive and harder to obtain, coincidentally pissing off and alienating the user base they never acknowledge - namely, non-paying users. Soon they'll provide little more than a deep, dark chasm for IT departments to back up dumptrucks of money.

But anyway, here's about the only stuff I remember that could be helpful:

I definitely know that if you have trouble physically passing through _any_ PCIe device (not just dGPUs, but _especially_ GPUs, of course), make sure ACS checks are disabled in your host's advanced settings. You could also try to blacklist the driver for the GPU so the host doesn't grab it first; some people's setups still have that issue, even with a dedicated video card that the host isn't using. The most effective way to do that is with a kernel flag in the bootloader, just like Linux (because VMware is just stolen Linux). I remember using kickstart flags for other things on ESXi, like setting up support for macOS, or hosts that wouldn't boot after an update, but I've never explicitly used any flags for PCI passthrough, so I don't have any exact pearls of knowledge to oyster all over the place at the moment (other than disabling ACS checks; it's effective af).
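For the ACS-check tip above: the advanced host option is VMkernel.Boot.disableACSCheck, which can also be set from the ESXi shell (assumption: the option name is unchanged on your ESXi build; reboot the host afterwards):

```
esxcli system settings kernel set -s disableACSCheck -v TRUE
# verify
esxcli system settings kernel list -o disableACSCheck
```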

I moved away from where I lived years ago when I wrote to this thread, where I had my homelab and participated in these edge-case rituals with all y'all. They're kind of fun, but there's no future unless there's community, so I promised myself I wouldn't fill my next room with servers and I'd go all-in on open source. I gave my big-ass servers to a friend who wanted equipment to record IP cameras, and my virtualization workloads are now almost all short-lived tests in VMs and containers on laptops and workstations.

Anyway, good luck with that, but just wanted to let you know there's a whole other ecosystem out there where you're encouraged to patch software, away from that awful, soul-sucking closed ecosystem.

Oh, one last thing - I did just come across NVIDIA's supported vDGA passthrough methods (also shared vGPU resources for VDI), here's their official repo - but they might be charging for licenses, too: GitHub - NVIDIA/vgpu-device-manager: NVIDIA vGPU Device Manager manages NVIDIA vGPU devices on top of Kubernetes
 