Proxmox iGPU passthrough?


IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Has anyone successfully passed through their iGPU to a VM in Proxmox? Not finding much info on this.
 

thefarelkid

New Member
Sep 17, 2020
3
0
1
I just did this on my NUC about 2 weekends ago. I was glad I found this blog post to do it. I didn't end up going all the way to a Docker container with it, but it is definitely being picked up by my Ubuntu VM.
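For reference, the usual Proxmox-side setup for Intel iGPU passthrough is something along these lines (a rough sketch, not taken from that guide; the VM ID and exact kernel parameters are just examples and will vary by hardware):

Code:
# /etc/default/grub -- enable the IOMMU on the Proxmox host
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules -- load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

# apply the changes and reboot the host
update-grub
reboot

# pass the iGPU at 00:02.0 through to the VM (VM ID 100 is just an example)
qm set 100 -hostpci0 0000:00:02.0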
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
I just did this on my NUC about 2 weekends ago. I was glad I found this blog post to do it. I didn't end up going all the way to a Docker container with it, but it is definitely being picked up by my Ubuntu VM.
I found that post too, but unfortunately it didn't work for my setup (Intel Xeon E-2246G on a SuperMicro X11-SCH-F). This is what my lspci output looks like:

Code:
root@athens:~# lspci
00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 07)
00:01.0 PCI bridge: Intel Corporation Skylake PCIe Controller (x16) (rev 07)
00:02.0 Display controller: Intel Corporation Device 3e96
00:08.0 System peripheral: Intel Corporation Skylake Gaussian Mixture Model
00:12.0 Signal processing controller: Intel Corporation Cannon Lake PCH Thermal Controller (rev 10)
00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)
00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10)
00:15.0 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH Serial IO I2C Controller (rev 10)
00:15.1 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH Serial IO I2C Controller (rev 10)
00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10)
00:16.1 Communication controller: Intel Corporation Device a361 (rev 10)
00:16.4 Communication controller: Intel Corporation Device a364 (rev 10)
00:17.0 SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller (rev 10)
00:1b.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port (rev f0)
00:1b.4 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port (rev f0)
00:1b.5 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port (rev f0)
00:1c.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port (rev f0)
00:1c.1 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port (rev f0)
00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port (rev f0)
00:1e.0 Communication controller: Intel Corporation Device a328 (rev 10)
00:1f.0 ISA bridge: Intel Corporation Device a309 (rev 10)
00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)
00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)
01:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]
03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
04:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
06:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 04)
07:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
08:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961

Are you running your Proxmox off a ZFS drive/mirror?

When I try to add the display controller at 00:02.0 (Intel Corporation Device 3e96) to the VM, I get this:

Code:
kvm: -device vfio-pci,host=0000:00:02.0,id=hostpci0,bus=pci.0,addr=0x10: vfio 0000:00:02.0: Failed to set up TRIGGER eventfd signaling for interrupt INTX-0: VFIO_DEVICE_SET_IRQS failure: Transport endpoint is not connected
TASK ERROR: start failed: QEMU exited with code 1
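For context, that -device vfio-pci argument comes from the hostpci entry in the VM's config, which amounts to something like this (the VM ID in the path is a placeholder):

Code:
# /etc/pve/qemu-server/<vmid>.conf -- the passthrough entry behind that vfio-pci argument
hostpci0: 0000:00:02.0

# or equivalently, set it from the shell
qm set <vmid> -hostpci0 0000:00:02.0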
 

thefarelkid

New Member
Sep 17, 2020
3
0
1
I'm sorry that didn't help you. I must have just really lucked out on finding a guide from someone who has my exact hardware. I'm not running ZFS on Proxmox. Best of luck!
 

Markess

Well-Known Member
May 19, 2018
1,146
761
113
Northern California
Did you ever figure this out? I'm having my own passthrough adventure and came across this thread while searching for answers.

Maybe a dumb question, but since Proxmox uses IOMMU/VFIO for passthrough, don't you need to isolate the entire IOMMU group the GPU belongs to for the passthrough to work?

I believe the "Transport endpoint is not connected" error is a network/data-transfer error. So if you're getting it when you try to isolate the iGPU, it could be that the iGPU shares an IOMMU group with a network or data-transfer device (NIC, disk controller, etc.).

Did you check your IOMMU group assignments to see whether the iGPU is in a group with something else? Again, maybe a dumb question, and you may have been way past that step already.
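If not, a quick way to check is to dump the groups straight from sysfs on the Proxmox host with something like this (a generic sysfs walk, nothing Proxmox-specific):

Code:
#!/bin/bash
# List every IOMMU group and the devices in it, so you can see
# what (if anything) shares a group with the iGPU at 00:02.0
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for device in "$group"/devices/*; do
        echo -e "\t$(lspci -nns "${device##*/}")"
    done
done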
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Did you ever figure this out? I'm having my own passthrough adventure and came across this thread while searching for answers.

Maybe a dumb question, but since Proxmox uses IOMMU/VFIO for passthrough, don't you need to isolate the entire IOMMU group the GPU belongs to for the passthrough to work?

I believe the "Transport endpoint is not connected" error is a network/data-transfer error. So if you're getting it when you try to isolate the iGPU, it could be that the iGPU shares an IOMMU group with a network or data-transfer device (NIC, disk controller, etc.).

Did you check your IOMMU group assignments to see whether the iGPU is in a group with something else? Again, maybe a dumb question, and you may have been way past that step already.
I had no luck. I did check the IOMMU groups and there was nothing else in the group with the Intel Display controller.
 

Markess

Well-Known Member
May 19, 2018
1,146
761
113
Northern California
Well that sucks!

I've just started tinkering with virtualization again after about 12 years. It's pretty amazing what can be done now compared to then, but what really struck me is the almost endless range of unique hardware combinations and interactions that need workarounds once a setup strays from plain vanilla.