[BOUNTY] ESXi GPU Passthrough [ERROR 43]

Helloworld123

New Member
May 21, 2019
4
0
1
Hello!

I am looking for someone who can fix Error 43 on my VMs.
Running ESXi 6.0.0 U2 with these specs:
Intel Core i7-7700
NVIDIA GTX 1080


Things I've tried:
- Using both Windows 10 and Windows Server 2016, both result in error 43.
- Adding the following parameters to the VM configuration:
> hypervisor.cpuid.v0 = false
> SMBIOS.reflectHost = true
> pciPassthru0.msiEnabled = false



I'm not really looking for advice; I'd prefer to pay someone who can finally get this job done.
(I've had too many headaches from trying to fix this myself.)

Will be giving you a nice reward for your time.

Looking forward to hearing from people who have successfully fixed this issue.

Thank you.
 

marcoi

Well-Known Member
Apr 6, 2013
1,398
222
63
Gotha Florida
@Helloworld123 List your hardware and software versions, and which PCI slot your video card is in. What steps have you tried to resolve the issue? When does the issue happen? Etc.
 

zir_blazer

Active Member
Dec 5, 2016
260
77
28
I'd bet you're passing through a GeForce. Sadly, I can't give you precise instructions since I use QEMU/KVM with VFIO, not VMware ESXi. It may also depend on the specific card generation and driver version.

NVIDIA doesn't want people using their consumer GeForce cards in passthrough scenarios, since that is a feature they officially reserve for Quadros, and they enforce it at the driver level. The driver checks whether it is running in a VM environment (by looking at certain CPUID leaves related to the hypervisor and Windows Hyper-V), then refuses to load. The solution is to figure out what you have to tell the hypervisor to hide (assuming ESXi can do so), or maybe to use modded drivers, which I believe were available on GitHub at some point.
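As a rough illustration of the kind of check the driver performs: on a Linux guest, the hypervisor-present CPUID bit shows up as a `hypervisor` flag in /proc/cpuinfo. This is only a sketch of the concept (the function name and output strings are my own; the real driver queries the CPUID instruction directly, not /proc):

```shell
# Report whether a cpuinfo-style dump shows the "hypervisor" CPUID flag.
# Illustrative only; the NVIDIA driver performs this check natively via
# the CPUID instruction inside the Windows guest.
detect_vm() {
    # $1: path to a cpuinfo-style file (default: /proc/cpuinfo)
    cpuinfo="${1:-/proc/cpuinfo}"
    if grep -qw hypervisor "$cpuinfo"; then
        echo "hypervisor bit set"
    else
        echo "hypervisor bit clear"
    fi
}
```

Hiding the VM (e.g. via `hypervisor.cpuid.v0 = FALSE`) makes this bit read as clear inside the guest, which is why that .vmx key is the usual starting point.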
 

m_b

New Member
Feb 26, 2017
16
8
3
39
I have a similar setup (though on ESXi 6.5) - have you added "hypervisor.cpuid.v0 = FALSE" to the .vmx file? This stops ESXi reporting to the guest OS that it's running as a VM.
 

nk215

Active Member
Oct 6, 2015
316
92
28
46
Hiding the hypervisor from the GPU driver only works if you connect a monitor to the GPU's output, as in a multihead setup. It won't work if you want to use remote graphics such as PCoIP, RDP, etc.

Furthermore, the workaround only works with certain GeForce cards (10xx cards work).
 

m_b

New Member
Feb 26, 2017
16
8
3
39
That's fair. If you want to run headless you'll need "modified" drivers or some kind of hardware dongle that emulates a connected monitor (I've seen Headless Ghost mentioned a lot on different forums).

@Helloworld123 To be helpful, we need to know a little more about your setup (e.g., Windows version, graphics card model, whether there is a connected monitor or you are running headless, etc.)
 

Helloworld123

New Member
May 21, 2019
4
0
1
Hello there, thank you for all your responses.

I am running the server headless from a remote dedicated server.
So I have no attached monitor, and am looking to control it through RDP.

As you might have guessed, I am using an NVIDIA card, a GTX 1080 to be exact, plus an Intel Core i7-7700 CPU.

I have tried adding the following to the .vmx file as well:
hypervisor.cpuid.v0 = false
pciPassthru0.msiEnabled = false
SMBIOS.reflectHost = true

I've tried doing this on both Windows 10 and Windows Server 2016.
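For anyone following along: these keys can be appended to the VM's .vmx over SSH while the VM is powered off. A minimal sketch, assuming a hypothetical datastore path (substitute your own VM's path, and reload the config afterwards so ESXi re-reads the file):

```shell
# Append the usual Error-43 workaround keys to a .vmx file.
# The path below is a placeholder; use your own datastore/VM name.
VMX="/vmfs/volumes/datastore1/win10/win10.vmx"   # hypothetical path

append_vmx_keys() {
    # $1: path to the .vmx file (power the VM off first)
    cat >> "$1" <<'EOF'
hypervisor.cpuid.v0 = "FALSE"
pciPassthru0.msiEnabled = "FALSE"
SMBIOS.reflectHost = "TRUE"
EOF
}

# append_vmx_keys "$VMX"
# Then, on the ESXi host, make it re-read the config:
#   vim-cmd vmsvc/getallvms       # find the VM's id
#   vim-cmd vmsvc/reload <vmid>
```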
 

m_b

New Member
Feb 26, 2017
16
8
3
39
It sounds like you're hitting the nVidia driver restriction - all their consumer graphics cards need to have a monitor connected (or at least one of the cards in an SLI setup). I haven't needed to get headless working for my own set-up, but quickly googling around suggests a couple options:
  1. Buy a Headless Ghost (~$20) and plug that into the card to trick the driver into thinking there's a monitor attached

  2. Follow the instructions at [GUIDE] Fix Nvidia Code 43 Issue on Nvidia GPU to create a patched driver that bypasses the restriction.
 

Helloworld123

New Member
May 21, 2019
4
0
1
> It sounds like you're hitting the nVidia driver restriction - all their consumer graphics cards need to have a monitor connected (or at least one of the cards in an SLI setup). I haven't needed to get headless working for my own set-up, but quickly googling around suggests a couple options:
>   1. Buy a Headless Ghost (~$20) and plug that into the card to trick the driver into thinking there's a monitor attached
>
>   2. Follow the instructions at [GUIDE] Fix Nvidia Code 43 Issue on Nvidia GPU to create a patched driver that bypasses the restriction.

Yeah, it's a dedicated server in a datacenter, so the first option won't be possible.

For the second fix, would you be able to assist me with bypassing it that way via AnyDesk?
I am unsure what to do, and whether I'll mess it up, because the instructions are written for Hyper-V.

Thanks
 

Docop

New Member
Jul 19, 2016
16
0
1
41
With a lot of modding it can run for a while, but as soon as you reboot it stops working; you have to remove the card and then re-install/reconfigure. The Quadro series works just fine. And 6 is much better than 6.7, which gives some glitches.
 

besterino

New Member
Apr 22, 2017
27
7
3
43
Not true in such generality. I've got 6.7U3 working, including VM reboots, and it's stable so far.

And "a lot of mod" is just a few entries in passthru.map and some advanced VM settings.
 

sophware

New Member
Nov 1, 2019
1
0
1
> Not true in such generality. I've got 6.7U3 working, including VM reboots, and it's stable so far.
>
> And "a lot of mod" is just a few entries in passthru.map and some advanced VM settings.

It may not help the other person, but it would help me. I have 6.7U3 and I'm searching for passthru.map information, but I'd love the procedure if you happen to have it.
 

besterino

New Member
Apr 22, 2017
27
7
3
43
Posted this in the other thread:

Thought I’d share this after only silently reading here so far: I tinkered around with my totally „unsupported“ rig (2080Ti, X399/Threadripper 1920x) the last couple of days and finally figured out how to passthrough both an „onboard chipset“ USB3 controller as well as the 2080Ti, in particular on 6.7U3. Getting this to work „once“ is not such a big problem, but it gets tricky if both devices are to „survive“ and still work after a reboot of the VM.

As some of you are aware, until now - and without the settings below - the GPU would throw the famous „error code 43“ after a reboot of the VM and only work again once the ESXi host is rebooted as a whole.

So far the most common (only?) workaround seemed to be to disable the GPU in the device manager before VM-reboot and enable it again once the VM is up again. Some automated this procedure with respective scripts.

The following worked with Windows 10 (1903) VMs started both in BIOS as well as EFI Mode and did not require any manual / scripted interventions.

The secret is:

1. Edit passthru.map and delete / comment out the default NVIDIA setting ("10de ffff bridge false"). This general/wildcard setting for all NVIDIA devices does not work, at least not for my 2080Ti FE. Instead I needed a more granular approach: I had to set d3d0 for ALL NVIDIA "sub-devices" of my graphics card EXCEPT the GPU itself. The GPU now has no override anymore, and ESXi uses its defaults (which work for the GPU, but unfortunately not for the USB controller...). In my case the passthru.map now looks like this:

...
#NVIDIA
#Audio
10de 10f7 d3d0 false
#Serial Bus
10de 1ad7 d3d0 false
#USB
10de 1ad6 d3d0 false
...

(Reboot the host after you made changes to the passthru.map)
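Step 1 can also be scripted over SSH. A sketch, assuming the stock wildcard line is present and using the 2080 Ti device IDs quoted above (check `lspci -n` on the host for your own card's IDs; the function name is mine):

```shell
# Comment out the default NVIDIA wildcard rule in passthru.map and
# append per-device d3d0 overrides. Device IDs below are for a
# 2080 Ti FE (audio / serial bus / USB); substitute your own.
PASSTHRU_MAP="/etc/vmware/passthru.map"   # standard ESXi location

patch_passthru_map() {
    # $1: path to passthru.map
    cp "$1" "$1.bak"                      # keep a backup
    # Comment out the "10de ffff bridge false" wildcard, if present.
    sed -i 's/^10de ffff bridge false/# &/' "$1"
    cat >> "$1" <<'EOF'
# NVIDIA 2080 Ti sub-devices: audio, serial bus, USB
10de 10f7 d3d0 false
10de 1ad7 d3d0 false
10de 1ad6 d3d0 false
EOF
}

# patch_passthru_map "$PASSTHRU_MAP"   # then reboot the host
```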

2. Add ALL devices of your NVIDIA graphics card as PCI passthrough devices to your VM, i.e. for the 2080Ti: GPU, Audio, USB and Serial Bus.

NOTE: I did not test in detail whether this is really necessary, or whether it is enough to add only one or certain devices in addition to the GPU. Adding all of them seemed reasonable to get every device properly reset at reboot (which was also in line with some snippets I read somewhere), and it worked. ;)

NOTE2: would be interesting to see what happens if you just delete the NVIDIA settings without adding anything else... ah... I need more time...

3. In addition to the usual "hypervisor.cpuid.v0 = FALSE", set the following for ALL NVIDIA passthrough devices EXCEPT the NVIDIA USB controller of the 2080Ti:

pciPassthru0.msiEnabled = FALSE

NOTE: replace the 0 (zero) after pciPassthru with the correct device number for each of your devices.
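With several passthrough devices, those per-device keys can be generated instead of typed by hand. A sketch (the function name and the device numbering are my own illustration; which index is the NVIDIA USB controller depends on the order you added the devices, so check the VM's existing pciPassthru entries first):

```shell
# Emit msiEnabled = FALSE lines for a list of pciPassthru device
# numbers. Pass only the numbers that should get the override,
# i.e. everything except the NVIDIA USB controller's index.
emit_msi_overrides() {
    # $@: pciPassthru device numbers
    for n in "$@"; do
        echo "pciPassthru${n}.msiEnabled = \"FALSE\""
    done
}

# Example: GPU, audio and serial bus are devices 0-2; USB (3) is skipped.
# emit_msi_overrides 0 1 2 >> /vmfs/volumes/datastore1/win10/win10.vmx
```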

NOTE for AMD:
Last but not least: my board (X399D8A-2T) / X399 chipset / Threadripper is a bitch when it comes to ESXi, in particular USB passthrough. Didn’t work properly even with 6.5U2. For the onboard controller to survive a VM-Reboot I also had to modify the passthru.map like this:

# AMD
1022 ffff d3d0 false

Probably the specific device ID would work as well instead of the ffff wildcard, but I was lazy and haven't tested further (yet)...

Works now also in 6.5U2 and 6.7U3, with either EFI/BIOS VM startup setting.
 

Rizwan

New Member
Sep 12, 2020
1
0
1
> Yeah, it's a dedicated server in a datacenter, so the first option won't be possible.
>
> For the second fix, would you be able to assist me trying to bypass it this way via anyDesk?
> I am unsure what to do, and if I am going to mess up because there's instructions for HyperV.
>
> Thanks
Hi Sir, I have the same scenario. I have a dedicated server with a GPU (GeForce GTX 1080) in a datacenter. I installed ESXi 6.7 with 2-3 VMs on it. The graphics card is not passing through properly: its status shows Active in Passthrough, I added it as a PCI device and the VM found the GTX 1080, and I installed the graphics driver in the VM, but the GTX 1080 is still not powering up and I am getting an error. Please let me know if you still have the issue, or if you found a solution.

Thanks