Does this mean that only one VM with one GPU attached will work on each ESXi server? I have a 6.0u3 ESXi server with three Windows 10 VMs and three Nvidia GPUs, one passed through to each of them. I've been meaning to figure out how to enable GPU passthrough on ESXi 6.7, which has not worked for me so far.

Has anyone got passthrough working without setting the external GPU to primary?
I would really like to be able to enable the internal (Intel) graphics as primary display.
I got it working with a 1080 in ESXi 6.7/6.7u1 with the following additions:
I had to disable the internal graphics and set the 1080 to primary; it won't work with the internal graphics enabled as secondary.
hypervisor.cpuid.v0 = "FALSE"
pciPassthru0.msiEnabled = "FALSE"
SMBIOS.reflectHost = "TRUE"
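(In case it helps anyone: these lines go into the VM's .vmx file while the VM is powered off, either via the VM's advanced configuration parameters in the host client or by editing the file directly over SSH. A rough sketch - the datastore and VM names below are placeholders for your own:)
# VM must be powered off; path and name are examples
vi /vmfs/volumes/datastore1/Win10-GPU/Win10-GPU.vmx
# append the three lines above, then power the VM back on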
Thanks! Nakivo doesn't work without snapshots, so I'll have to look into Veeam. Is it possible to automate the shutdown, backup, and startup with Veeam, or is it a manual procedure?

I use Veeam. To back up, the VM needs to be shut down first. Two of my VMs are configured with passthrough: the firewall with a NIC and a Windows HTPC with the video card. Both back up (and restore) just fine.
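If you want to automate it, one rough option (a sketch, not an official Veeam workflow I can vouch for) is to wrap the backup job with pre/post scripts that shut the VM down and power it back on over SSH on the host. The VM ID (12) below is a placeholder - look yours up first:
# before the backup job (graceful shutdown needs VMware Tools in the guest)
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.shutdown 12
# ... run the backup job ...
# after the job completes
vim-cmd vmsvc/power.on 12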
# NVIDIA
# VMware default
# 10de ffff bridge false
# NVIDIA Quadro P2200
# GPU
# 10de 1c31 d3d0 false
# HD Audio
10de 10f1 d3d0 false
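(A note on where this file lives, since it is easy to miss: on ESXi it is /etc/vmware/passthru.map. The vendor/device IDs for your own card can be read from the host shell, roughly like this:)
# list PCI devices; note the Vendor ID / Device ID fields for your card's functions
esxcli hardware pci list
# edit the map, then reboot the host for the change to take effect
vi /etc/vmware/passthru.map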
bcdedit /set testsigning on
#This is the GPU
pciPassthru0.id = "00000:001:00.0"
pciPassthru0.deviceId = "0x1c31"
pciPassthru0.vendorId = "0x10de"
pciPassthru0.systemId = "5f0cc5f6-0de4-ab68-ab76-a4ae111e3c60"
pciPassthru0.present = "TRUE"
#This is the HD Audio controller
pciPassthru1.id = "00000:001:00.1"
pciPassthru1.deviceId = "0x10f1"
pciPassthru1.vendorId = "0x10de"
pciPassthru1.systemId = "5f0cc5f6-0de4-ab68-ab76-a4ae111e3c60"
pciPassthru1.present = "TRUE"
pciPassthru0.pciSlotNumber = "256"
pciPassthru1.pciSlotNumber = "1184"
hypervisor.cpuid.v0 = "FALSE"
SMBIOS.reflectHost = "TRUE"
pciPassthru0.msiEnabled = "FALSE"
pciPassthru1.msiEnabled = "FALSE"
Thought I'd share this after only silently reading here so far: I tinkered around with my totally "unsupported" rig (2080Ti, X399/Threadripper 1920X) over the last couple of days and finally figured out how to pass through both an onboard-chipset USB3 controller and the 2080Ti, in particular on 6.7U3. Getting this to work once is not such a big problem, but it gets tricky if both devices are to survive and still work after a reboot of the VM.
As some of you are aware, until now - and without the settings below - the GPU would throw the famous "error code 43" after a reboot of the VM and only work again once the ESXi host as a whole was rebooted.
So far the most common (only?) workaround seemed to be to disable the GPU in the Device Manager before the VM reboot and enable it again once the VM is back up. Some automated this procedure with scripts.
The following worked with Windows 10 (1903) VMs started in both BIOS and EFI mode and did not require any manual or scripted interventions.
The secret is:
1. Edit passthru.map and delete / comment out the default NVIDIA setting ("#10de ffff bridge false"). This general wildcard setting for all NVIDIA devices does not work - at least not for my 2080Ti FE. Instead I needed a more granular approach: I had to set d3d0 for ALL NVIDIA "sub-devices" of my graphics card EXCEPT the GPU itself. The GPU now has no override at all and ESXi will use its defaults (which works for the GPU, but unfortunately not for the USB controller...). In my case the passthru.map now looks like this:
...
#NVIDIA
#Audio
10de 10f7 d3d0 false
#Serial Bus
10de 1ad7 d3d0 false
#USB
10de 1ad6 d3d0 false
...
(Reboot the host after you made changes to the passthru.map)
2. Add ALL devices of your NVIDIA graphics card as PCI passthrough devices to your VM, i.e. for the 2080Ti: GPU, Audio, USB and Serial Bus.
NOTE: I did not test in detail whether this is really necessary or whether it is enough to only add one or certain devices in addition to the GPU. Adding all of them seemed reasonable to get every device properly reset at reboot (which was also in line with some snippets I read somewhere) and it worked.
NOTE2: would be interesting to see what happens if you just delete the NVIDIA settings without adding anything else... ah... I need more time...
3. In addition to the usual hypervisor.cpuid.v0 = "FALSE", set the following for ALL NVIDIA passthrough devices EXCEPT the NVIDIA USB controller of the 2080Ti:
pciPassthru0.msiEnabled = "FALSE"
NOTE: replace the 0 (zero) after pciPassthru with the correct number for each of your devices.
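To make the numbering concrete, here is a sketch of how the relevant .vmx lines could end up looking if the four devices were added in the order GPU (0), Audio (1), Serial Bus (2), USB (3) - the order on your VM will likely differ, so match the numbers against the pciPassthruN.deviceId lines the host client writes for you:
hypervisor.cpuid.v0 = "FALSE"
#GPU
pciPassthru0.msiEnabled = "FALSE"
#Audio
pciPassthru1.msiEnabled = "FALSE"
#Serial Bus
pciPassthru2.msiEnabled = "FALSE"
#USB controller (pciPassthru3): no msiEnabled override, per the note above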
NOTE for AMD:
Last but not least: my board (X399D8A-2T) / X399 chipset / Threadripper is a bitch when it comes to ESXi, in particular USB passthrough. Didn’t work properly even with 6.5U2. For the onboard controller to survive a VM-Reboot I also had to modify the passthru.map like this:
# AMD
1022 ffff d3d0 false
The specific device probably works as well instead of the ffff wildcard, but I was lazy and haven't tested further (yet)...
Works now also in 6.5U2 and 6.7U3, with either EFI/BIOS VM startup setting.
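For reference, a narrower version of that AMD entry might look like the line below - 145c is only an example device ID for an AMD USB 3.0 xHCI controller, so check your own with esxcli hardware pci list (mentioned earlier in the thread) before using it:
# AMD - specific USB controller instead of the ffff wildcard (untested example)
1022 145c d3d0 false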
Double-check in the ESXi host that it is set up as passthrough. Maybe you moved a PCIe device around in the system and it changed order?

Hello!
Please tell me what this could be: I was able to pass the video card (GeForce GTX 690) through in ESXi 6.5, and it appeared in Windows 10 (the drivers were installed but not accepted by the video card - that is another question...).
And now I DO NOT SEE it in the list of PCI devices!
Rebooted the host several times - it did not help.
Once again: before it WAS on the list of devices, and it could be correctly passed through, but now it just disappeared!
The video card definitely works, and power is supplied.
This helped (host shutdown), thanks a lot!!! But what was it? Why a shutdown and not a reboot?
Also, did you try powering off the host to see if that cleared it up? I'm not talking about just the VM, but the whole system.
Sometimes the video card gets messed up and it needs a host POST to clear it up.
For 1 - see if you can move the cards to different PCIe slots; you might be drawing too much power or hitting a PCIe lane limit. You will need to experiment, since it isn't an exact science.

Okay, but the main problems remain:
I have an Intel SR2600URBRP server (S5520UR motherboard) with ESXi 6.5 installed on it.
The problem has two parts:
1) When a video card and a Wi-Fi PCIe card are installed in the riser board at the same time, the video card (Nvidia GTX 690) is not visible in the hypervisor's list of PCI devices. If you remove the Wi-Fi card, the video card appears.
As far as I know, both devices fall into the same IOMMU group, and it seems this problem can be solved by some kind of patch...
That is, I would very much like to achieve the simultaneous operation of both devices.
2) The video card is correctly passed through to the guest OS; it is visible in the Windows 10 Device Manager as two devices (the card has two GPUs). But the drivers will not load, and SketchUp, which requires hardware acceleration, does not see the external video card.
I added the necessary lines to the config (following guides from the Internet) so that the card would be correctly detected in the guest OS, but it did not help.
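(For reference, the lines usually meant by that are the ones from earlier in this thread; with the GTX 690's two GPU functions passed through, and assuming they ended up as pciPassthru0 and pciPassthru1 on the VM, that would be something like:)
hypervisor.cpuid.v0 = "FALSE"
SMBIOS.reflectHost = "TRUE"
pciPassthru0.msiEnabled = "FALSE"
pciPassthru1.msiEnabled = "FALSE"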