> Ok, I got it! It looks like it's been working all this time, but I thought it was hanging because it suddenly stopped outputting anything (due to an i915 crash, presumably?) on Proxmox's VNC console during boot. After logging in via SSH, I can see the renderer in /dev/dri. ACPI must be ON, otherwise it doesn't work. Jellyfin transcoding works, and intel_gpu_top shows ffmpeg using it.

Nice one. Apologies, I assumed you knew the console output didn't work on passthrough, as you seemed quite technical/experienced in that area. That is standard behaviour for a passthrough device.
To sum up:
- passthrough Raw 0000:00:02.0, All-Functions ON, ROM-Bar ON, PCI-Express OFF
- ACPI ON; expect the Proxmox VNC console to stop working, so the VM must be accessed via SSH
Thank you @gregrob and @JimsGarage for helping me debug it.
```
balloon: 0
boot: order=scsi0;ide2
cores: 4
cpu: host
hostpci0: mapping=x710-nic
hostpci1: 0000:00:02
ide2: none,media=cdrom
memory: 8000
meta: creation-qemu=8.1.5,ctime=1716265662
name: ubuntu-server
numa: 0
ostype: l26
scsi0: local-lvm:vm-904-disk-0,iothread=1,size=32G,ssd=1
scsihw: virtio-scsi-single
smbios1: xxxxxxx
sockets: 1
vmgenid: xxxxxxx
```
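For reference, a sketch of how the summarized GUI settings map onto an explicit hostpci line in the VM config (an assumption on my part: in Proxmox, specifying `0000:00:02` without a function suffix passes all functions, `rombar=1` is the default, and leaving `pcie` unset keeps PCI-Express off):

```
hostpci1: 0000:00:02,rombar=1
```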
> Nice one. Apologies, I assumed you knew the console output didn't work on passthrough as you seemed quite technical/experienced in that area. That is standard behaviour for a passthrough device.

I'm aware that output on the GPU does not work on the host (i.e. Proxmox) when the GPU is passed through. The problem was that the VM's output to the virtualised VNC display in Proxmox's web GUI stopped working.
I've confirmed that `lspci -nnk` shows the i915 driver in use.
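For anyone else debugging this, a sketch of the in-VM checks described above (the paths are the usual ones, but the render node name, e.g. renderD128, can vary):

```shell
# Check that the passed-through iGPU exposes a DRM render node in the VM.
# /dev/dri should contain card0 and renderD128 once i915 has loaded.
if [ -d /dev/dri ]; then
    ls /dev/dri
else
    echo "no /dev/dri: the GPU is not visible here"
fi

# Confirm which kernel driver is bound to the VGA device (expects i915).
lspci -nnk 2>/dev/null | grep -i -A 3 'vga' || true
```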
Not in the UK. It's a rip-off.

Hey @dvdplm, I have exactly the same issue with the same hardware. If you go back in this thread you can find the advice to tape a few pins on the PCIe card. I tried almost everything: disabling other PCIe ports, removing disks, taping every combination of pins, but it still doesn't work. If you find a solution, please post it here. Today I flashed the BIOS hoping it would help, but it didn't.
I'm waiting for a newer BIOS; that is the only hope.
Btw, this is my second MS-01. I sent back the first one because of this issue, thinking I had got a broken unit.
Stay in touch, I hope they will finally fix it.

> The new memory arrived today and both DIMMs work. I'm running a RAM test now.

That memory worked, but I just got another MS-01 (i9-13900H like the other one) and another Crucial 96GB kit, and one of the DIMMs in the new kit is bad. Has Crucial quality gone downhill lately, or is the MS-01 just picky about RAM? Does anyone have a non-Crucial 96GB kit they've used (preferably from Amazon)?
> I am debating what to use the x16 slots for. I was thinking maybe have a GPU in one, and then a couple with a PCIe card that allows two additional 22110 M.2s for Ceph in the others.

The MS-01 has only one PCIe expansion slot: physically x16, electrically PCIe 4.0 x8.
> Ok, I got it! It looks like it's been working all this time …

Do you get the video output from the VM on the HDMI port? I would like to use the MS-01 as a hypervisor, with one VM used as a media center.
> The MS-01 has only one PCIe expansion slot: physically x16, electrically PCIe 4.0 x8. The picture on their website is somewhat misleading, but the reviews show it clearly. You can't bifurcate this slot, so if you want to use more than one additional NVMe you will need a PCIe card with a PCIe switch chip. Most of those cards are limited to PCIe Gen 3.

I am aware of the single slot. I have 5, so I am saying I can throw a GPU in a couple, pin pods that need the GPU to those worker nodes, and then use the rest to expand my storage a bit. Just looking for recommendations; sorry if that wasn't clear. I will look into cards with a switch chip. I think Gen 3 is fine since I'm throwing Samsung PM983a's into it; I just need one that fits 22110.
> I am aware of the single slot. I have 5, so I am saying I can throw a GPU in a couple …

Oh, so I missed that. Sorry, English isn't my mother tongue.
> I will look into cards with a switch chip, I think gen 3 is fine since I'm throwing Samsung PM983a's into it just need one that fits 22110.

The Qnap QM2-2P-384 fits the bill perfectly, although it is somewhat pricey. If you go for this card, double-check that you get the older version, QM2-2P-384, and not the QM2-2P-384A. The newer one does not fit; it is longer.
> Qnap QM2-2P-384 perfectly fits the bill, although somewhat pricey …

Just curious, have you tested the Qnap QM2-2P-384 to see if it works with the MS-01? There were other adapters from QNAP that were not compatible (they fit just fine but would not boot).
> Do you get the video output from the VM on the HDMI port? I would like to use the MS-01 as a hypervisor, with one VM used as a media center.

Unfortunately no. No matter what I tried, there was no output from the VM on the screen when passing through the GPU.
> That memory worked, but I just got another MS-01 (i9-13900H like the other one) and another Crucial 96GB kit …

Both DIMMs in the second set worked. So 50% of the kits have had a bad DIMM, or 25% of the total DIMMs I've tried.
> Unfortunately no. No matter what I tried, there was no output from the VM on the screen when passing the GPU.

Thanks for the recommendation! That is a lovely card, but I agree it's pricey! I'll keep an eye on this thread to see if anyone tests one. There are tons more 'A' variants on eBay right now, so good call on the difference in length.
> Give me a little more time and I will try to redesign it to make it scalable, which takes a lot of time. The case is 2U, as it is designed to go big or go home with the fan (120 mm). The one I posted above fits perfectly in a 10" rack. To make it completely scalable, going to a 19" rack with one or two MS-01s side-by-side, I would need to offer a choice for different use cases. I will let you guys know when it's done.

I would love to use this too!
> I've tested it and it runs without problems, but I switched to Proxmox because I do not want to disable any cores or think about core affinity.

I installed 8.0.2 on both of my MS-01s and set the options, but didn't disable the E-cores or set CPU affinity. I migrated all the VMs to the MS-01s and they've been running for a couple of days now without any issues.
ESXi on Minisforum MS-01 (williamlam.com)
Experimenting with ESXi CPU affinity and Intel Hybrid CPU Cores (williamlam.com)