Troubleshooting GPU passthrough on ESXi 6.5


Vi1i

New Member
Mar 28, 2018
Also try testing it in ESXi first to make sure it's actually working without passthrough. If it's good, then set it back to passthrough and test with a new VM without the video card. These tests will help with troubleshooting.
So after I disable the passthrough, what is the best way to test it in ESXi? Should I SSH in and run lspci and lsusb to see if it exists?
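Something like this from the ESXi shell (a sketch; the grep pattern is just an example)?

lspci | grep -i nvidia      # does the card still enumerate on the host?
lsusb                       # are the USB devices visible?
esxcli hardware pci list    # full PCI inventory, including passthrough status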
 

Vi1i

New Member
Mar 28, 2018
So after I disable the passthrough, what is the best way to test it in ESXi? Should I SSH in and run lspci and lsusb to see if it exists?
So I was just an idiot: the power connector on the card had fallen out... It was working properly all along...
Thanks for the help!
 

dastrix

New Member
Jun 5, 2018
Yes, I have stuck with AMD successfully over the years, and even then it takes a while to get a "good" one.

Probably a little off topic, but interesting all the same. I just installed ESXi 6.5 on one of my Intel NUC Skull Canyons. I had a small problem with not being able to see my Bluetooth adapter, which was resolved by disabling the new (and buggy?) VMware USB driver. I then noticed that I was able to pass through the onboard Intel Iris Pro 580 GPU to one of my Windows 10 64-bit VMs, and that is when things became a little interesting. The latest Intel Win10 graphics driver installed without any problems, but unfortunately I could not get an image on any monitor that I connected to any port. I knew the graphics driver was doing its job because I used Intel's GPU-assisted Quick Sync to do a few H.264 encodes with HandBrake, which all performed extremely well. It would be great to have onboard passthrough finally working on a NUC. Does anybody have any ideas on how I might get a monitor working?
I have the same issue. Quick Sync hardware acceleration works fine with an Intel HD 6400 built into my i7 Haswell.
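(For reference, an easy way to confirm Quick Sync is actually doing the work is a CLI encode. A sketch, assuming a QSV-enabled HandBrake build; filenames are placeholders:)

HandBrakeCLI -i input.mkv -o output.mp4 --encoder qsv_h264   # encode with the Intel QSV H.264 encoder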

But I can't get an image out of the onboard HDMI.

Did you solve this?

Can anyone weigh in on how to get this to work? Thanks!
 

Sully

New Member
Feb 26, 2018
After a few different setups and attempts, I turn to the experts in this thread to help me with the last goal for my Plex server: GPU passthrough of an NVIDIA Quadro P2000.

Hardware:
Running ESXi 6.5.0 (build 4240417)
on a new-to-me:
HP ProLiant DL380p Gen8 SFF
256 GB of RAM
Dual Xeon E5-2690 v2s
Onboard 10GBASE-T NICs
PCIe Samsung Pro NVMe SSD, 1 TB
GPU as mentioned above: NVIDIA Quadro P2000, a single-slot, 75 W card that I feel is the pound-for-pound champ.


I have the GPU mounted in the top slot (#1, PCIe 3.0 x16, full length, full height), for those familiar with the box or who did a quick spec-sheet search.


One thing I have noticed, and this would be a good place to start, is that ESXi doesn't recognize the NVIDIA card in the hardware section as a specific model. It's calling it "nVidia Corporation VGA compatible controller" and "nVidia Corporation Audio Device". In most of the screenshots here, when adding the GPU to VMs, the specific model being added is shown. Not sure if 6.5 U2 will fix this or if I need to install a specific NVIDIA VIB for VMware. This should be enough to get me pointed in the right direction.

I am able to enable passthrough for the VGA compatible controller and the audio device. I can add them to a VM (Ubuntu 16.04 LTS), but as soon as I reboot after the NVIDIA driver install, I get stuck in the dreaded login screen loop. Feel free to ask for additional information; I work second shift week nights, so I'm usually in front of a keyboard for 8+ hours a night. Thanks for taking the time.

I would also like to add that I do have a physical monitor attached to the P2000. (I know that matters in some cases)

================= UPDATE 7/17/2018 =================
Updated to the HPE-specific ESXi 6.5.0 U2 image, and now the Quadro P2000 shows up properly under hardware as GP106GL [Quadro P2000] (pic added).
Rebooting this evening and will report back with findings.
================= UPDATE 7/18/2018 =================
Added both the GP106GL and the audio controller to the 16.04 LTS VM, installed the proprietary NVIDIA driver, and rebooted. Once rebooted, I am greeted by the login screen, where I am stuck in a login loop.



~Sully
 


AveryFreeman

consummate homelabber
Mar 17, 2017
@vinay
Also, you need to make the video card the primary display in the BIOS, or else it doesn't seem to initialize correctly when you pass it to the VM. This means that as you boot ESXi the screen will stop updating and look like it hung, but it should still be booting in the background. Once you power on your VM, your monitor should show the VM display.
Hi, sorry for reviving the necrothread, but I was hoping you could tell me about this statement and maybe help me a bit:

I took your "video card primary display in BIOS" setting to mean the HOST's BIOS (not the VM's), right? I am using a C612 board with IPMI, and the change broke my iKVM, but I tried it anyway; the VM doesn't show any graphics until the Windows lock screen.

Sometimes when I reboot after the VM was shut down improperly, I just get a black screen, since I can't see anything until Windows is booting; so Windows recovery / advanced startup options / chkdsk / etc. sit out there where I can't see them, since Windows isn't booting...

Do you have any idea how to fix that?
 

marcoi

Well-Known Member
Apr 6, 2013
Hi, sorry for reviving the necrothread, but I was hoping you could tell me about this statement and maybe help me a bit:

I took your "video card primary display in BIOS" setting to mean the HOST's BIOS (not the VM's), right? I am using a C612 board with IPMI, and the change broke my iKVM, but I tried it anyway; the VM doesn't show any graphics until the Windows lock screen.

Sometimes when I reboot after the VM was shut down improperly, I just get a black screen, since I can't see anything until Windows is booting; so Windows recovery / advanced startup options / chkdsk / etc. sit out there where I can't see them, since Windows isn't booting...

Do you have any idea how to fix that?
Yes, it was meant as the host BIOS, and it will mess up the iKVM.

As for your issue with rebooting, I'm not sure. Unfortunately I no longer have the setup, and it's been a while.
 

Rand__

Well-Known Member
Mar 6, 2014
I don't think you can easily fix that :/
Of course, an external monitor might show the status (or a remote workstation card), if that's an option in your case?
 

AveryFreeman

consummate homelabber
Mar 17, 2017
I don't think you can easily fix that :/
Of course, an external monitor might show the status (or a remote workstation card), if that's an option in your case?
I have an IPMI BMC, so I have access to a host video card, but the VM just acts weird. I don't think GPU passthrough works very well with ESXi 6.7. I think the 6.7 EFI+USB bug is why I am not able to make the passed-through GPU primary, but when I tried an EFI VM I couldn't pass USB through to control the VM. It's just a crap tradeoff, and indicative of the kind of errors I am subjecting myself to.

Maybe I should just go with KVM and say f-it.
 

Rand__

Well-Known Member
Mar 6, 2014
No, I meant an actual workstation card, something like this:
Remote Workstation Card

Not sure whether you'd need a cheap client for it though, but the total cost would be around 100 bucks used.

This card plugs into the video output and sends it to the client device, and it starts at boot time since it's using the output rather than a driver.
Not sure whether it would work in your setup, since you'd need to use the primary output.

I use such a card for remote gaming on a physical box (since I couldn't pass through the GTX correctly), but it might be an option for you, depending on what your actual goal is. Or maybe it's totally not what you need, but I wanted to mention it ;)
 

marcoi

Well-Known Member
Apr 6, 2013
I have an IPMI BMC, so I have access to a host video card, but the VM just acts weird. I don't think GPU passthrough works very well with ESXi 6.7. I think the 6.7 EFI+USB bug is why I am not able to make the passed-through GPU primary, but when I tried an EFI VM I couldn't pass USB through to control the VM. It's just a crap tradeoff, and indicative of the kind of errors I am subjecting myself to.

Maybe I should just go with KVM and say f-it.
Are you able to go down to ESXi 6.5? It might end up working better than 6.7 for what you are trying to do.
 

das1996

Member
Sep 4, 2018
Made some progress. Picked up a cheap MSI video card from Micro Center, GT 710 based.

Initially had ESXi 6.7 installed. Got it to the point of initializing the HDMI video, but still had issues passing the USB Bluetooth dongle through, so I decided to revert to 6.5.

Installed 6.5 from scratch (the actual Windows VM is on a separate SSD). Had to blast away the original .vmx. Created a new one, but rather than making a new disk, pointed it at the VMDK from before.
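(The relevant .vmx lines end up looking something like this; the datastore path and file names are placeholders:)

scsi0:0.present = "TRUE"
scsi0:0.fileName = "/vmfs/volumes/datastore1/win10/win10.vmdk"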

Now, the secret sauce. Adding hypervisor.cpuid.v0 to the .vmx got me as far as initializing the DVI output of the card, but not HDMI.
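(For reference, the usual form of that entry is:)

hypervisor.cpuid.v0 = "FALSE"   # hide the hypervisor from the guest so the NVIDIA driver doesn't refuse to load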

In my search I came across this, TR-1950X + GTX 1060 Passthrough with ESXi : Amd, which references the /etc/vmware/passthru.map file.

In it I added the following lines at the bottom, then rebooted the ESXi host.

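# format: vendor-id device-id reset-method fptShareable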
10de 128b d3d0 false
10de 0e0f d3d0 false

The first two sets of numbers come from the lspci -n output. They reference the video card and its HD audio component.

0000:01:00.0 Class 0300: 10de:128b
0000:01:00.1 Class 0403: 10de:0e0f

The second number (128b, 0e0f) is the device ID and will be unique to your particular card.

The third entry (d3d0) is a reset method (it resets the device by cycling its PCI power state from D3 back to D0), and whatever the mechanics, it works! This got rid of error code 43, which had appeared once I set svga.present to false.

I don't have a PCIe USB card. In fact, the box this is going into once I'm done testing only has two PCIe slots: one will hold the video card, the other a quad-port NIC. Unless I can somehow force a HID device through, I'm stuck using Bluetooth (or something else).

Edit: It should be noted that the onboard Intel video was disabled in the BIOS. Re-enabling it and selecting it as the initial display brings back error code 43. Ideally it'd be nice to retain it so access to ESXi recovery is possible. Not a complete deal breaker: one can always go into the BIOS to re-enable it, fix ESXi, then change back to PCIe init.

Edit 2: No go with onboard video enabled in any capacity. Even when initial video is set to PCIe, still error 43. So onboard video has to be disabled.
 

das1996

Member
Sep 4, 2018
Next challenge is figuring out how to wake the thing.

Sleep mode in Windows appears to work, but how do I wake it? Since the mouse/keyboard are Bluetooth, it's not waking the VM even though the BT adapter is not set to sleep.

The next option was WOL with the virtual NIC. This didn't work either; I recall reading that passthrough mode disables wake/suspend. Next I tried PCI passthrough of the NIC itself (the built-in motherboard NIC). No go here either.

The final option was WOL via a passed-through port on a PCIe NIC (Intel quad-port T340, IIRC). Same result.

With both the motherboard NIC and the quad-port NIC in passthrough, using the Windows sleep function results in the CPU spiking to 100% on the ESXi host.
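(If anyone wants to rule out the magic packet itself, sending one from another Linux box is trivial; the MAC address is a placeholder:)

wakeonlan 00:11:22:33:44:55          # or: etherwake -i eth0 00:11:22:33:44:55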

Thoughts/ideas?
 

das1996

Member
Sep 4, 2018
Made some more progress, but also ran into a block.

I've given up on trying to get the onboard Intel HD 530 video to work. I got it to the point where the passed-through GPU is properly recognized by Windows, with no error 43 or 31. The only problem: the damn thing will not output anything to the actual display. It's as though the HDMI cable is not even connected. The PC has DisplayPorts too; no go there either. I'm at a loss for what else to try.

Getting onboard video to work would be the ideal solution, because it would let me install a PCIe USB card to which I could then pass HID devices without ESXi's roadblocks. As it is, unless I find a wireless keyboard/touchpad that isn't HID, a Bluetooth keyboard and mouse need to be used. Even with that, I needed to use a generic BT dongle, as the one from Microsoft (which has HID profiles) does not get passed properly.

On the topic of power management, I did get that fixed in a most obscure way.

[screenshot: the VM's power-management options, with "Suspend the virtual machine" selected]

Choosing the option above lets the VM enter sleep mode properly without pegging the CPU (which happened with the 2nd option). From this state I can't shut it down cleanly from the ESXi host; I can, however, reset it or power it off. But WOL now works!!! It's a bit counterintuitive, as one would think the 2nd option would be more appropriate ("Put the guest OS into......"). Something to keep in mind when shutting down or rebooting the host while the VM is in sleep mode. If it's awake, the host shutdown function is there and works.
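(Host-side, a sleeping VM can still be reset or powered off from the ESXi shell; the vmid below is an example:)

vim-cmd vmsvc/getallvms      # look up the VM's id
vim-cmd vmsvc/power.reset 1  # hard reset (vmid 1 is an example)
vim-cmd vmsvc/power.off 1    # hard power-off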

So not 100% success but 95%.
 

das1996

Member
Sep 4, 2018
The last step of this HTPC debacle was the mouse/keyboard. I really hate having a separate (Bluetooth) mouse/keyboard. So... I reused the RAM, upgraded the CPU (6600K vs 6500), and took the HD/SSDs from the Dell OptiPlex 7040 SFF. Picked up a Z170-based motherboard on eBay for dirt cheap ($55): an MSI Z170A Gaming Pro Carbon. It's a full-size ATX board with plenty of slots, an onboard Intel NIC, DVI/HDMI video (not that it's of any use), lots of USB ports, and one oddity.

It has a separate ASMedia USB 3.1 Gen 2 chip. This works very nicely for passing those two ports (rear, red) to the VM. I now have my Logitech keyboard/touchpad back!! Nothing special required in passthru.map. I'm sure the Dell guts will sell eventually.
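(A quick way to confirm a controller like that enumerates as its own PCI device before passing it through, from the ESXi shell; the grep pattern is just an example:)

lspci | grep -i usb    # the ASMedia controller should show up separately from the chipset's xHCI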

I also bumped the RAM up to 32 GB from 16. Overall, the machine is drawing about 25-30 more watts. This includes the additional RAM and a 6 TB 3.5" 5400 RPM HD (vs. a 2.5" 2 TB 5400). Other carry-over components include the quad-port NIC and the GT 710 (with fan) video card. Using an Antec EarthWatts 380 W power supply. I'm sure the Dell 180 W unit in the SFF chassis is more efficient than this Antec PSU (80 Plus certified). I could probably save some more power with a more efficient PSU.

Total power draw from just the 'box' (not including other networking equipment) is around 50 watts with general internet use (firewall, PBX, UPS monitor). With Windows going, it goes up about 10-15 W more, depending on bitrate.

A small price to pay for convenience. Figure 27.5 more watts, or roughly $29 more a year on the electric bill, or $2.41 more a month. I can swing that. With the larger box I can now add a mail server and other uses.
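(Checking the math: 27.5 W x 24 h x 365 days ≈ 241 kWh/year; at roughly $0.12/kWh that works out to about $29/year, or $2.41/month.)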
 

Tasik008

New Member
Oct 3, 2018
Hi, I registered specifically to ask for help with setting up PCI passthrough on VMware ESXi 6.7.
MSI z370 Gaming Plus;
i5 8600K;
ASUS GTX 1050ti OC strix;
16GB RAM;
SSD Samsung 970 Pro.

This is the third day I've been trying to configure PCI passthrough to a Windows 10 Enterprise virtual machine. As a client I use VMware Remote Console. I've already looked through a lot of sites and tips and made a few changes to the VM's configuration file.
Briefly, what has been done so far: the video card is connected via an HDMI-to-VGA converter on the ESXi host, and in the UEFI settings the primary display is set to PEG. The video card shows up in the web interface, and passthrough is active. When adding the video card to the virtual machine an error appears, and "Expose hardware assisted virtualization to the guest OS" gets disabled. The video card is detected in the guest system without errors, but the driver will not install: the system reports that an error occurred during installation. Attempting to patch the driver doesn't lead anywhere; running it throws errors, and the installer does not appear in the destination folder.
The virtual machine was created on ESXi 6.5, and the ESXi host was then updated. On 6.5, similar errors were observed.
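(For what it's worth, the .vmx tweaks that came up earlier in this thread are below; no guarantee they apply here:)

hypervisor.cpuid.v0 = "FALSE"   # hide the hypervisor from the guest driver
svga.present = "FALSE"          # disable the virtual VGA adapter (the VMware console goes dark)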
I've attached the configuration; I really hope for your support! :)
 
