Troubleshooting GPU passthrough ESXi 6.5


superempie

Member
Sep 25, 2015
The Netherlands
Yeah, 6.7U3 works best for me too. And indeed, reserve the memory.
The svga.present parameter is something I've had mixed experiences with in the past: sometimes it worked, sometimes it didn't. The steps I described are the ones I follow when setting up a new Windows VM.
hypervisor.cpuid.v0 shouldn't be needed now that Nvidia has removed the restriction it worked around, but it doesn't hurt in my opinion.

Another long shot: if the GTX 1050 is connected through DisplayPort, you may be running into the DisplayPort bug.
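For reference, the parameters discussed in this thread end up as plain key/value lines in the VM's .vmx file. A minimal sketch only; exact values depend on your setup, and svga.present = "FALSE" is typically set only once the GPU driver is working inside the guest:

```
hypervisor.cpuid.v0 = "FALSE"
svga.present = "FALSE"
pciPassthru.use64bitMMIO = "TRUE"
```

The last line is often needed for GPUs with large BARs; it is unrelated to the Nvidia check but commonly paired with these settings.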
 

Docop

Member
Jul 19, 2016
I can't say for the GTX, as previous versions needed some extra lines in the VM config if I remember correctly. I run the Quadro lineup and those are all directly compatible. Even with the newer Proxmox releases no longer working, a direct Proxmox 7.1 install with no updates might give you some hints. You can install it on a USB stick to try it out, too.
 

sev

New Member
Jul 26, 2022
superempie said:
Yeah, 6.7U3 works best for me too. And indeed, reserve the memory.
The svga.present parameter is something I've had mixed experiences with in the past: sometimes it worked, sometimes it didn't. The steps I described are the ones I follow when setting up a new Windows VM.
hypervisor.cpuid.v0 shouldn't be needed now that Nvidia has removed the restriction it worked around, but it doesn't hurt in my opinion.

Another long shot: if the GTX 1050 is connected through DisplayPort, you may be running into the DisplayPort bug.
I've tried with both DVI and HDMI, no difference there.
 

ARNiTECT

Member
Jan 14, 2020
I initially had issues getting 2x GPUs + iGPU to pass through.
  • You could try an older host BIOS. I am stuck on an old BIOS for my Supermicro X11-SCA-F, as newer ones won't even boot with the 3 GPUs.
  • The order that the GPUs are set in BIOS made a difference for me: Load Optimised Defaults, Primary Display = PCI, Primary PEG = Slot 4, Primary PCI = Onboard, Internal Graphics = Enabled, Option ROM Video = UEFI.
  • When my primary GPU stops working, I always need to fully power off the host and do a cold boot; resetting the host doesn't work.
  • Toggling the primary GPU for passthrough in ESXi 7 was a nightmare: toggling 1 of its 4 devices sets the others off in a random active/disabled flicker. After a lot of persistence I got all 4 active, since without all 4 I couldn't use the GPU in a VM; however, I only pass through 2 of the devices (video & audio).
  • Add all of the GPU's devices into /etc/vmware/passthru.map
  • I went through a few headless ghost display emulator adapters in HDMI and DP until one worked.
  • I installed VNC before GPUs
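For the passthru.map step above, each line is four whitespace-separated columns: PCI vendor ID, device ID, reset method, and fptShareable. A sketch for an Nvidia GPU, assuming d3d0 as the reset method; ffff acts as a wildcard for any device ID from that vendor, and the real IDs can be read with lspci -n on the host:

```
# /etc/vmware/passthru.map
# vendor-id  device-id  resetMethod  fptShareable
10de  ffff  d3d0  false
```

The right reset method (d3d0, flr, link, bridge, default) varies per card, so treat this as a starting point rather than a known-good entry.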
 

sev

New Member
Jul 26, 2022
ARNiTECT said:
I initially had issues getting 2x GPUs + iGPU to pass through.
  • You could try an older host BIOS. I am stuck on an old BIOS for my Supermicro X11-SCA-F, as newer ones won't even boot with the 3 GPUs.
  • The order that the GPUs are set in BIOS made a difference for me: Load Optimised Defaults, Primary Display = PCI, Primary PEG = Slot 4, Primary PCI = Onboard, Internal Graphics = Enabled, Option ROM Video = UEFI.
  • When my primary GPU stops working, I always need to fully power off the host and do a cold boot; resetting the host doesn't work.
  • Toggling the primary GPU for passthrough in ESXi 7 was a nightmare: toggling 1 of its 4 devices sets the others off in a random active/disabled flicker. After a lot of persistence I got all 4 active, since without all 4 I couldn't use the GPU in a VM; however, I only pass through 2 of the devices (video & audio).
  • Add all of the GPU's devices into /etc/vmware/passthru.map
  • I went through a few headless ghost display emulator adapters in HDMI and DP until one worked.
  • I installed VNC before GPUs

So I edited the passthrough map, set the device ID in there along with the vendor ID, and set the reset method to d3d0. Following your advice, I did a complete shutdown and cold-booted the machine. When it came back up and I started the VM with the passthrough, much to my surprise I saw the Windows logon screen on the monitor the GPU was attached to!

Unfortunately, in pretty quick order, the monitor blue-screened. I rebooted/cold-booted and tried a bunch of things, but couldn't get it to work again.


So I'm using a Dell Precision 3620. This machine has an Intel i7-6700 with an Intel C236 chipset, and I have updated the BIOS to the latest version. I might try downgrading, but I cannot set which slot the graphics card comes up on, only which one is primary.
 
Dec 3, 2020
Hi all,

I am new to this topic and would like to get the iGPU of an AMD Ryzen 7 Pro 4750G working in passthrough mode. I am using this CPU in an ASRock Rack X570D4U-2L2T mainboard.

IOMMU is activated in the BIOS and this is the result:

[screenshot attachment: 1689958986774.png]

I have added all 7 of the 0000:30:00.x devices to a test VM and exclusively allocated the memory to this VM, but the VM won't start.

[screenshot attachment: 1689959066126.png]


When I open the .vmx file of this VM, I can see that these 7 devices have a different PCI address (0000:48:00.x instead of the 0000:30:00.x shown in the GUI).

What's the reason? Which ID should I write into passthru.map?

Is there a way to define my own IOMMU groups in ESXi so I can separate these AMD/ATI devices from the rest?
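Not a full answer, but a hint on the passthru.map part: the file is keyed on PCI vendor/device IDs rather than bus addresses, so the 30:00.x vs 48:00.x discrepancy shouldn't matter for that file. A hedged sketch for the AMD/ATI devices, where 1002 is AMD's PCI vendor ID, ffff is a wildcard device ID, and d3d0 is just one of the possible reset methods:

```
# /etc/vmware/passthru.map
# vendor-id  device-id  resetMethod  fptShareable
1002  ffff  d3d0  false
```

The actual vendor:device pairs can be confirmed with lspci -n on the host before narrowing the entry down from the wildcard.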