Phew, finally got HDMI output working on Windows with full iGPU passthrough.
Turns out I needed the GOP ROM taken straight from the BIOS file, instead of the generic 12th/13th-gen one provided over at GitHub - gangqizai/igd (Intel integrated GPU passthrough ROM files for PVE); all the other instructions there still apply (machine type i440fx, etc.). The GOP ROM can be extracted from the 1.22 BIOS file using MMTool (look for the option ROM matching the iGPU PCI ID).
Windows VM!!! Oh, great news. Can you share the ROM file, please?
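For anyone trying to reproduce this, here is a rough sketch of what the relevant bits might look like once the ROMs are extracted; the filenames gen12_igd.rom and gen12_gop.rom are placeholders, and the exact config layout comes from the gangqizai/igd README rather than anything verified here:
Bash:
# Copy the ROMs pulled out with MMTool to where Proxmox/QEMU expects custom ROM files
cp gen12_igd.rom gen12_gop.rom /usr/share/kvm/
# Relevant lines in /etc/pve/qemu-server/<vmid>.conf (the gangqizai/igd README also
# attaches the GOP ROM to a second passed-through device; follow it for the exact layout):
#   bios: seabios
#   machine: pc-i440fx-8.1
#   vga: none
#   hostpci0: 0000:00:02.0,legacy-igd=1,romfile=gen12_igd.rom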
Hi there. Has anyone successfully managed to pass through one of the X710 ports in Proxmox to a VM? I need one of the two, but whatever I do it passes both of them.
I also tried the SR-IOV route: I set the number of VFs to 4 and tried passing one of the new devices that appeared to a VM. It seems OK, but the VM doesn't have internet...
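Two things worth checking, sketched below under assumptions (the interface name enp6s0f0 is a placeholder for whichever X710 port you use): both X710 functions often sit in the same IOMMU group, which is why they get passed together, and i40e VFs frequently need spoof checking relaxed or a fixed MAC before traffic flows inside the guest.
Bash:
# List IOMMU groups to see whether the two X710 functions share one group
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    printf 'group %s: ' "$g"; lspci -nns "${d##*/}"
done

# Create 4 VFs on one port (enp6s0f0 is an assumed name) and relax VF filtering
echo 4 > /sys/class/net/enp6s0f0/device/sriov_numvfs
ip link set enp6s0f0 vf 0 mac 02:11:22:33:44:55   # any locally administered MAC
ip link set enp6s0f0 vf 0 spoofchk off
ip link set enp6s0f0 vf 0 trust on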
@jro77 @Pikeman1868 I just tested my MS-01 (1.22 BIOS) again with Sparkle Intel Arc A310 Eco and it posts.
Here is lspci output from Proxmox:
Bash:
00:00.0 Host bridge [0600]: Intel Corporation Device [8086:a706]
00:01.0 PCI bridge [0604]: Intel Corporation Device [8086:a70d]
00:06.0 PCI bridge [0604]: Intel Corporation Raptor Lake PCIe 4.0 Graphics Port [8086:a74d]
00:06.2 PCI bridge [0604]: Intel Corporation Device [8086:a73d]
00:07.0 PCI bridge [0604]: Intel Corporation Raptor Lake-P Thunderbolt 4 PCI Express Root Port [8086:a76e]
00:07.2 PCI bridge [0604]: Intel Corporation Raptor Lake-P Thunderbolt 4 PCI Express Root Port [8086:a72f]
00:0d.0 USB controller [0c03]: Intel Corporation Raptor Lake-P Thunderbolt 4 USB Controller [8086:a71e]
00:0d.2 USB controller [0c03]: Intel Corporation Raptor Lake-P Thunderbolt 4 NHI [8086:a73e]
00:0d.3 USB controller [0c03]: Intel Corporation Raptor Lake-P Thunderbolt 4 NHI [8086:a76d]
00:14.0 USB controller [0c03]: Intel Corporation Alder Lake PCH USB 3.2 xHCI Host Controller [8086:51ed] (rev 01)
00:14.2 RAM memory [0500]: Intel Corporation Alder Lake PCH Shared SRAM [8086:51ef] (rev 01)
00:16.0 Communication controller [0780]: Intel Corporation Alder Lake PCH HECI Controller [8086:51e0] (rev 01)
00:16.3 Serial controller [0700]: Intel Corporation Alder Lake AMT SOL Redirection [8086:51e3] (rev 01)
00:1c.0 PCI bridge [0604]: Intel Corporation Alder Lake-P PCH PCIe Root Port [8086:51bb] (rev 01)
00:1c.4 PCI bridge [0604]: Intel Corporation Device [8086:51bc] (rev 01)
00:1d.0 PCI bridge [0604]: Intel Corporation Alder Lake PCI Express Root Port [8086:51b0] (rev 01)
00:1d.2 PCI bridge [0604]: Intel Corporation Device [8086:51b2] (rev 01)
00:1d.3 PCI bridge [0604]: Intel Corporation Device [8086:51b3] (rev 01)
00:1f.0 ISA bridge [0601]: Intel Corporation Raptor Lake LPC/eSPI Controller [8086:519d] (rev 01)
00:1f.3 Audio device [0403]: Intel Corporation Raptor Lake-P/U/H cAVS [8086:51ca] (rev 01)
00:1f.4 SMBus [0c05]: Intel Corporation Alder Lake PCH-P SMBus Host Controller [8086:51a3] (rev 01)
00:1f.5 Serial bus controller [0c80]: Intel Corporation Alder Lake-P PCH SPI Controller [8086:51a4] (rev 01)
01:00.0 PCI bridge [0604]: Intel Corporation Device [8086:4fa1] (rev 01)
02:01.0 PCI bridge [0604]: Intel Corporation Device [8086:4fa4]
02:04.0 PCI bridge [0604]: Intel Corporation Device [8086:4fa4]
03:00.0 VGA compatible controller [0300]: Intel Corporation DG2 [Arc A310] [8086:56a6] (rev 05)
04:00.0 Audio device [0403]: Intel Corporation DG2 Audio Controller [8086:4f92]
05:00.0 Non-Volatile memory controller [0108]: Kingston Technology Company, Inc. OM8PGP4 NVMe PCIe SSD (DRAM-less) [2646:501b]
06:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [8086:1572] (rev 02)
06:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [8086:1572] (rev 02)
5b:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I226-V [8086:125c] (rev 04)
5c:00.0 Non-Volatile memory controller [0108]: MAXIO Technology (Hangzhou) Ltd. NVMe SSD Controller MAP1602 [1e4b:1602] (rev 01)
5d:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]
5e:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I226-LM [8086:125b] (rev 04)
5f:00.0 Network controller [0280]: MEDIATEK Corp. MT7922 802.11ax PCI Express Wireless Network Adapter [14c3:0616]
03:00.0 VGA compatible controller [0300]: Intel Corporation DG2 [Arc A310] [8086:56a6] (rev 05)
04:00.0 Audio device [0403]: Intel Corporation DG2 Audio Controller [8086:4f92]
- those are the PCI devices from the card.
And lshw:
Bash:
root@proxmox-ms01:~# lshw -c video
  *-display UNCLAIMED
       description: VGA compatible controller
       product: DG2 [Arc A310]
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:03:00.0
       version: 05
       width: 64 bits
       clock: 33MHz
       capabilities: pciexpress msi pm vga_controller bus_master cap_list
       configuration: latency=0
       resources: iomemory:400-3ff memory:6d000000-6dffffff memory:4000000000-40ffffffff memory:6e000000-6e1fffff
  *-graphics
       product: simpledrmdrmfb
       physical id: 4
       logical name: /dev/fb0
       capabilities: fb
       configuration: depth=32 resolution=800,600
Thank you for testing. I am about to get the same setup soon, and I wonder whether there are any tricks needed or a guide you followed to make this work?
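For what it's worth, actually passing the A310 through generally just needs the standard Proxmox vfio prep. A minimal sketch, assuming IOMMU is already enabled in the BIOS and on the kernel command line, and using the IDs shown in the listing above (8086:56a6 for the GPU, 8086:4f92 for its audio function):
Bash:
# Bind both Arc A310 functions to vfio-pci so the host driver leaves them alone
echo "options vfio-pci ids=8086:56a6,8086:4f92" > /etc/modprobe.d/vfio.conf
printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules
update-initramfs -u -k all
# After a reboot, both 03:00.0 and 04:00.0 should report "Kernel driver in use: vfio-pci"
lspci -nnk -s 03:00.0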
Hi. I'm considering getting an MS-01, and I hope it's not hijacking to ask this question here, which seems to be the main place on STH for MS-01 issues.
Basically: I'll be using this as a Proxmox server only, and I have no need to attach a monitor and keyboard to it. But I've never used vPro before, only IPMI-based systems, and I don't know how this works. Um, how does it work? The Minisforum page has a nice picture saying "Control via network cable", showing MeshCommander running and displaying the BIOS, but that page, and the downloadable user manual, doesn't say anything about how to actually get this set up.
And it's not obvious where to plug in a network cable for this: there are two SFP+ ports and two 2.5G ports, but there's no extra port for management.
The overall point is, I'd like to plug a network cable into it somewhere and control everything, including the initial installation, over the network, and I'd like to know how to do that.
There's a thread here on the forum all about MS-01 vPro management. Short answer: the 2.5G LAN port next to the 10G SFP+ is the vPro port. The IP can be dedicated or shared with the host OS; I opted to use dedicated IPs for vPro management, separate from the Proxmox IP address. To set up the port and enable management, boot the system, press Escape, click Setup, and go into the Intel AMT menu to configure it. I've tried a couple of different apps for management, each having something I like better than the others, but I settled on MeshCommander. The remote desktop is workable, but its serial-over-LAN leaves a lot to be desired.
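For anyone setting this up headless: once AMT is configured in that BIOS menu, it can be reached over the network from another machine. A rough sketch, assuming 192.168.1.50 is the dedicated vPro IP and 'admin' plus the password set in the AMT menu are the credentials (all placeholders):
Bash:
# MeshCommander is available as an npm package and serves a local web UI
npm install -g meshcommander
meshcommander        # then open http://127.0.0.1:3000 and add the AMT host 192.168.1.50

# Serial-over-LAN from a Linux shell via the amtterm package
amtterm -u admin -p 'YourAmtPassword' 192.168.1.50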
Oh, thanks. Found that thread now. So I guess I have to at least hook stuff up for the initial setup. I think I'd also use a separate IP for management (it's what I do now for IPMI on my other Proxmox server). I don't plan on using this for any extensive management, just to do basic stuff if things crash.
Unfortunately, the precise time that remote management would be most useful is when the system crashes. We've all learned that when the system gets wedged, vPro management gets wedged along with it. The only cure is to physically pull the power cord and reinsert it. After struggling for many weeks with random Proxmox crashes, I'm finally starting to feel a bit stable with over 15 days of uptime across a 5-server cluster of MS-01s. Can't say exactly what stabilized it, but I think the BIOS update to v1.22 helped.
Huh, this is upsetting to hear. Is this a Proxmox issue, or an MS-01 issue? I plan to use this to run servers on Proxmox, and I'm used to servers having uptime measured in years. The idea of celebrating a 15-day uptime is concerning.
I too measure my uptime in years. My previous Proxmox cluster never crashed, not a single node, not once for any reason. My previous cluster nodes are old 2U Xeon boxes: cheap, loud, power hungry, and hot, but stable as can be. In fact, the MS-01 crashes have been the first I've seen on Proxmox. Very upsetting indeed, especially considering how much I've spent getting this cluster up and running. Personally, I think it's the i9 processor. I've been following reports of gamers claiming that the i9 randomly crashes on specific games; maybe the Proxmox crashes are happening for the same reason. It's not just me seeing the Proxmox crashes, either: a lot of users in various threads are reporting the same. I've been keeping a real keen eye on everything I can to try to track down exactly what is causing the crashes. Whatever it is, I hope they find it and fix it soon, "they" being Intel, kernel devs, Proxmox devs, MS-01 devs, or whomever. I don't really care who does it, but it needs to be fixed.
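Not a fix, but for anyone else chasing these crashes, a small sketch of what can be collected after each lockup; it assumes nothing beyond a stock Proxmox/Debian install plus the optional rasdaemon package:
Bash:
# Keep the journal on disk so logs from before a hard power pull survive
mkdir -p /var/log/journal && systemctl restart systemd-journald
# After the next crash, review errors from the previous boot
journalctl -b -1 -p err
# Optionally record machine-check / hardware error events
apt install rasdaemon
systemctl enable --now rasdaemon
ras-mc-ctl --summary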
@JaxJiang Can you address whether a Coral will work in the wifi slot? And how many lanes the wifi slot has, i.e. whether it will support the dual Coral?
I can confirm that the Dual Edge E-key form factor does not work in the wifi slot: it fits but doesn't enumerate. I'm guessing the slot has only one PCIe lane, which the product page warns will not work. I haven't tried the A+E key variant.
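For anyone who wants to check the slot's lane count directly rather than guess, a quick sketch; 5f:00.0 is the stock MT7922 wifi card's address from the lspci listing earlier in the thread, so substitute whatever card is actually sitting in the slot on your unit:
Bash:
# LnkCap is what the link can do, LnkSta is what was actually negotiated (Width = lane count)
lspci -vv -s 5f:00.0 | grep -E 'LnkCap|LnkSta'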
Hello all, does anyone know if the individual M.2 NVMe disk ports are available to be passed through to a QEMU VM in Proxmox? As of now I don't see them in the Add PCI Device screen in Proxmox 8.2.2 (latest).
Answering my own question: `lspci | grep Non-Volatile` in the Proxmox shell showed me all three disks. Even though they are the exact same make/brand of disk, it looks like they use two different controllers. I was able to expose the disks natively to OMV using these PCIe IDs.
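For anyone following along, a minimal sketch of attaching one of those NVMe controllers from the shell instead of the GUI; the bus address 0000:05:00.0 and VM ID 101 below are placeholders, so substitute whatever lspci reports on your box:
Bash:
# List the NVMe controllers with their bus addresses and PCI IDs
lspci -nn | grep -i 'Non-Volatile'
# Attach one controller to VM 101 (pcie=1 needs the q35 machine type in that VM)
qm set 101 -hostpci0 0000:05:00.0,pcie=1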
I got it to work with an M2 A+E to B+M riser and the Dual Edge TPU Adapter:
5d:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
5e:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
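As a sanity check (a sketch that assumes the Coral PCIe gasket/apex driver is installed on whatever system ends up owning the TPUs): each enumerated Edge TPU should bind to the apex driver and expose a device node:
Bash:
# Show the kernel driver bound to each Edge TPU
lspci -nnk -d 1ac1:089a
# With the gasket/apex driver loaded there should be one node per TPU
ls /dev/apex_*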