Minisforum MS-01 PCIe Card and RAM Compatibility Thread


h0schi

Member
Oct 24, 2020
73
48
18
Germany
I have VMUG so am thinking of running ESXi 8 on the MS-01 instead of Proxmox since I'm more familiar with ESXi. Is anyone running ESXi with just the two options cpuUniformityHardCheckPanic=FALSE and ignoreMsrFaults=TRUE and leaving all the cores enabled? If so, do you have any issues?
I've tested it and it runs without problems, but I switched to Proxmox because I don't want to disable any cores or think about core affinity.
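For reference, this is roughly how I set the two options (from memory, so double-check against the VMware documentation):
Code:
# At the ESXi installer boot screen, press Shift+O and append:
cpuUniformityHardCheckPanic=FALSE ignoreMsrFaults=TRUE

# After installation, make the settings persistent from the ESXi shell:
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
esxcli system settings kernel set -s ignoreMsrFaults -v TRUE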


 

just_a_person

New Member
Apr 18, 2024
16
7
3
I hooked up a QXP-1600eS-A1164 to it (the card that ships with the TL-D1600S), which is a four-port version of the card that ships with the TL-D800S.

Works perfectly. Do note that I needed to take off the back plate to install it. Then the back plate goes back on OK.

The small fan on the PCIe card does produce an unpleasant high-pitched whine. But if you shut the unit behind a door it shouldn't be a problem.
Have you encountered any issues since setting it up? As I document in this thread, I experienced silent data corruption when using the MS-01 + TL-D1600S for an extended period of time.

 

JimsGarage

New Member
May 17, 2024
10
8
3
@JimsGarage glad I ran into your YT channel the other day, got the Authentik going. Nice to see you here :cool:

Speaking of the MS-01, did you by any chance try PCIe passthrough of the USB4 controller? For some reason it does not work, and others have reported this too. I'm hoping to use it only for data, but I suspect the fact that it can also carry video is the reason the passed-through controller can't recognize USB devices connected to it.
I haven't, no. I use the USB 4 ports for a ring network. The only other USB thing I do is to pass my conbee2 to a docker VM for my ZigBee mesh.
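(If you want to replicate the conbee2 setup, Proxmox makes that kind of USB passthrough a one-liner; the VM ID below is hypothetical, and the vendor:product pair comes from lsusb:)
Code:
# Attach a USB device to VM 100 by vendor:product ID (look it up with lsusb)
qm set 100 -usb0 host=1cf1:0030   # e.g. what a ConBee II typically reports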
 

dialbat

New Member
Feb 23, 2024
16
1
3
I haven't got my first electric bill yet, but I can guarantee it'll be less than the hot & noisy 12U of Xeons it replaced. This tiny rack is cool and quiet enough to sit inches from my keyboard and monitor. It's not even waist high, and even though I'm in meetings all day it doesn't bother me a bit. Thermal performance has been better than I expected: the rack fans are on speed 4 (of a 1-5 range), and I feel I could knock them down to 3 and still be OK. I can also replace the cheap fans with better/quieter ones, which I plan to do someday. Memory is a lot tighter in this rack compared to the 3TB of RAM in the Xeon rack, but I have no regrets. This lil rack rocks.
Can you please share which SFP DAC (direct attach) cable you used to connect the MS-01 to the switch?
 

Sebo

New Member
Jan 14, 2024
21
10
3
Interesting, it's working for me on all 3 of my nodes. Passed to a k3s cluster running 24.04.
I also have one passed to a docker VM running frigate CCTV. It's being used for object detection and acceleration.

What issues do you face?
I tried your guide (just used `blacklist i915` and `blacklist xe` as the GPU drivers to blacklist) and my Ubuntu 24.04 guest hangs during boot with a `slot initialization failed` error. I tried probably every combination of the Primary GPU, All Functions, ROM-Bar and PCI-Express checkboxes, and alternated between SeaBIOS and OVMF (UEFI). I even updated the BIOS to 1.22 (my unit shipped with 1.17) to make sure it's not something related to my unit being from an "old" batch.
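For reference, here is what I put in the modprobe config on the Proxmox host, in case someone spots a mistake (the file name is my own choice):
Code:
# /etc/modprobe.d/blacklist-gpu.conf -- keep the host from loading the iGPU drivers
blacklist i915
blacklist xe

# then rebuild the initramfs and reboot:
update-initramfs -u -k all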
 

JimsGarage

New Member
May 17, 2024
10
8
3
I tried your guide (just used `blacklist i915` and `blacklist xe` as the GPU drivers to blacklist) and my Ubuntu 24.04 guest hangs during boot with a `slot initialization failed` error. I tried probably every combination of the Primary GPU, All Functions, ROM-Bar and PCI-Express checkboxes, and alternated between SeaBIOS and OVMF (UEFI). I even updated the BIOS to 1.22 (my unit shipped with 1.17) to make sure it's not something related to my unit being from an "old" batch.
Did you blacklist the PCIe device as well?

echo "options vfio-pci ids=10de:1381,10de:0fbc disable_vga=1" > /etc/modprobe.d/vfio.conf

Change the device IDs to match your hardware.

Also, blacklist the same drivers I have. As I said, this works on all 3 of my devices. Perhaps they're a more recent revision, but I doubt it. This has always worked for me no matter the device (just not always for Windows guests).
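(The IDs come from lspci on the host; for an Intel iGPU the output looks something like this:)
Code:
# List VGA devices with their [vendor:device] IDs to use in vfio.conf
lspci -nn | grep -i vga
# e.g.: 00:02.0 VGA compatible controller [0300]: Intel Corporation Device [8086:a7a0]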
 
Last edited:

Sebo

New Member
Jan 14, 2024
21
10
3
Yes, I blacklisted the PCIe device. Do you blacklist anything besides the Iris Xe Graphics, e.g. the audio device?
I managed to solve the Ubuntu guest hanging when the iGPU is passed to it by disabling ACPI in the VM options, as advised here. PCI passthrough is configured as 0000:00:02.0,rombar=0. lspci -nnk shows Intel Corporation Device [8086:a7a0], but it does not load the i915 driver and does not output anything on the HDMI port.
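(For completeness, that corresponds to this line in the VM's /etc/pve/qemu-server/<vmid>.conf:)
Code:
# PCI passthrough line in /etc/pve/qemu-server/<vmid>.conf
hostpci0: 0000:00:02.0,rombar=0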

Could you post your exact `vfio.conf` and VM config?
 

JimsGarage

New Member
May 17, 2024
10
8
3
Yes, I blacklisted the PCIe device. Do you blacklist anything besides the Iris Xe Graphics, e.g. the audio device?
I managed to solve the Ubuntu guest hanging when the iGPU is passed to it by disabling ACPI in the VM options, as advised here. PCI passthrough is configured as 0000:00:02.0,rombar=0. lspci -nnk shows Intel Corporation Device [8086:a7a0], but it does not load the i915 driver and does not output anything on the HDMI port.

Could you post your exact `vfio.conf` and VM config?
I will post tomorrow. What kernel are you on? 6.2+ should be fine.
 
  • Like
Reactions: markconstable

gregrob

New Member
May 15, 2024
5
0
1
I wonder if we are talking about two different things here?

I have managed to pass the Xe Graphics through to a VM running Ubuntu Server 24.04 on Proxmox. I use this for hardware acceleration in a Docker container running on that Ubuntu server for Jellyfin and Frigate.

I could not get passthrough working for Xe Graphics and Ubuntu Desktop 24.04. It freezes up during boot and nothing is displayed on HDMI. I also couldn't get passthrough working on Windows 11; I was plagued with error 43 :( o_O

Fingers crossed for SR-IOV and an official release in the kernel (hopefully 6.10) for Xe Graphics. It might take a while to end up in Ubuntu and Proxmox, but it will save all the effort of building the intel backports / strongtz repo…
 

markconstable

New Member
Oct 1, 2022
17
2
3
Interesting, it's working for me on all 3 of my nodes. Passed to a k3s cluster running 24.04.
I also have one passed to a docker VM running frigate CCTV. It's being used for object detection and acceleration.

What issues do you face?
The main one is that the i915 kernel driver crashes inside the Linux VM guest (CachyOS in my case, with kernels 6.9 and 6.6.30 LTS). I see you said you would post your exact configs tomorrow, so I will wait for that post and try again with a fresh installation of Proxmox. The two things I will change are to try Ubuntu 24.04 as the guest and to disable ACPI in that guest, as I've seen suggested elsewhere.

One thing that occurred to me: when I initially set up my last test, I could actually see the boot procedure on my monitor, but the output stopped when the desktop started, and that's when I noticed the i915 crash in dmesg. After that initial glimpse of success, nothing I changed showed anything at all, so I suspect that a full shutdown of the host may be required to reset the hardware between (some) tests.
 

markconstable

New Member
Oct 1, 2022
17
2
3
I could not get passthrough working for Xe Graphics and Ubuntu Desktop 24.04. It freezes up during boot and nothing is displayed on HDMI.
I guess it locked up to the point where it wasn't running at all. If you can still get into it via SSH, it would be interesting to know whether you too are getting an i915 kernel module crash in dmesg.
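Something like this from an SSH session in the guest should surface it:
Code:
# Look for the i915 crash in the guest's kernel log
dmesg | grep -iE "i915|WARNING|cut here"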
 

gregrob

New Member
May 15, 2024
5
0
1
I guess it locked up to the point where it wasn't running at all. If you can still get into it via SSH, it would be interesting to know whether you too are getting an i915 kernel module crash in dmesg.
I had a quick look. Seems to be some failures with i915.
Code:
[Thu May 23 11:46:44 2024] i915 0000:00:11.0: [drm] [ENCODER:235:DDI A/PHY A] failed to retrieve link info, disabling eDP
[Thu May 23 11:46:44 2024] ------------[ cut here ]------------
[Thu May 23 11:46:44 2024] i915 0000:00:11.0: Platform does not support port C
[Thu May 23 11:46:44 2024] WARNING: CPU: 0 PID: 462 at drivers/gpu/drm/i915/display/intel_display.c:7473 assert_port_valid+0x79/0xa0 [i915]
[Thu May 23 11:46:44 2024] Modules linked in: qrtr binfmt_misc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common intel_pmc_core intel_vsec pmt_telemetry pmt_class kvm_intel snd_hda_codec_generic kvm snd_hda_intel snd_intel_dspcfg irqbypass snd_intel_sdw_acpi crct10dif_pclmul polyval_clmulni snd_hda_codec polyval_generic ghash_clmulni_intel sha256_ssse3 sha1_ssse3 snd_hda_core aesni_intel snd_hwdep crypto_simd cryptd snd_pcm snd_seq_midi i915(+) snd_seq_midi_event rapl snd_rawmidi drm_buddy snd_seq snd_seq_device snd_timer snd drm_display_helper cec rc_core soundcore i2c_algo_bit video wmi qxl drm_ttm_helper ttm vmgenid i2c_piix4 input_leds mac_hid serio_raw msr parport_pc ppdev lp parport efi_pstore nfnetlink dmi_sysfs qemu_fw_cfg ip_tables x_tables autofs4 psmouse crc32_pclmul iavf pata_acpi floppy
[Thu May 23 11:46:44 2024] CPU: 0 PID: 462 Comm: (udev-worker) Not tainted 6.8.0-31-generic #31-Ubuntu
[Thu May 23 11:46:44 2024] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org 04/01/2014
For me it is enough to spin up an LXC or VM and pass the iGPU through to Ubuntu Server. This works for Jellyfin etc. I don't really need the desktop, although it would be interesting to see it working :)
 

markconstable

New Member
Oct 1, 2022
17
2
3
I had a quick look. Seems to be some failures with i915.
Yes, I can confirm that is the same i915 kernel module crash that I am seeing. It's good to know that FULL passthrough is indeed working, up until the i915 driver is started for a desktop session.

FWIW, my lovely 17" 4K laptop died a few months ago, so I really need a daily driver desktop VM. I'd rather buy three more MS-01s than replace that laptop. ATM I am running a desktop directly on one MS-01 and using Incus to spin up a 64GB ram VM to install Proxmox into... maybe today, or never if I have success when Jim posts his iGPU passthrough guide/tips.
 

Sebo

New Member
Jan 14, 2024
21
10
3
I will post tomorrow. What kernel are you on? 6.2+ should be fine.
I'm on 6.8.4-3-pve.

@gregrob My goal is also to have it working on Ubuntu Server for transcoding. I've been testing it on an Ubuntu Desktop guest just to get faster feedback, my bad. I also tested it on an Ubuntu Server guest with similar results - I had to disable ACPI to make it not hang on boot. The iGPU also shows up in lspci, but I don't get any render device in /dev/dri/
 

gregrob

New Member
May 15, 2024
5
0
1
@Sebo that is strange. I am using the same version of Proxmox as you. I installed intel-gpu-tools so I could use intel_gpu_top and confirm it's working; I doubt that's the difference.

When I set GRUB to the minimum, GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt" (even intentionally ignoring the blacklisting of the iGPU), and configure the Ubuntu Server VM with no memory ballooning, Raw Device passthrough of 0000:00:02.0, All Functions and ROM-Bar ON, and processor = host, it works for me. I can even see it working by running intel_gpu_top in the Ubuntu Server VM. I get the following devices under /dev/dri:
Code:
total 0
drwxr-xr-x   3 root root        120 May 23 06:25 .
drwxr-xr-x  20 root root       4300 May 23 06:25 ..
drwxr-xr-x   2 root root        100 May 23 06:25 by-path
crw-rw----+  1 root video  226,   0 May 23 06:25 card0
crw-rw----+  1 root video  226,   1 May 23 06:25 card1
crw-rw----+  1 root render 226, 128 May 23 06:25 renderD128
Is it worth double-checking your settings against mine and giving that a go?
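One footnote on the GRUB line: it only takes effect after regenerating the boot config and rebooting the host, e.g.:
Code:
# Apply the kernel cmdline change on the Proxmox host, then reboot
update-grub                   # on hosts booting via GRUB
# proxmox-boot-tool refresh   # the alternative on ZFS/UEFI installs
reboot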

Worst case, as a backup, an LXC container running Docker can also work; that's the solution I've had running for years on an Intel NUC. I can share the config if it helps - the gist is below.
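(A sketch of the relevant container config lines - the device numbers are the standard DRM majors, but treat the exact paths as my setup, not gospel:)
Code:
# /etc/pve/lxc/<ctid>.conf -- give the container access to the host iGPU
# major 226 = DRM devices; renderD128 is what ffmpeg/Jellyfin use for QSV
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file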
 
Last edited:

Sebo

New Member
Jan 14, 2024
21
10
3
OK, I got it! It looks like it's been working all this time, but I thought it was hanging because it suddenly stopped outputting anything on Proxmox's VNC console during boot (due to the i915 crash, presumably?). After logging in via SSH, I can see the render device in /dev/dri. ACPI must be ON, otherwise it does not work. Jellyfin transcoding works, and intel_gpu_top shows ffmpeg using it.

To sum up (config sketch below):
- passthrough Raw 0000:00:02.0, All-Functions ON, ROM-Bar ON, PCI-Express OFF
- ACPI ON - expect the Proxmox VNC console to stop working; the VM must be accessed via SSH
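In Proxmox config-file terms, my understanding is that this maps to something like the following (my reading of the GUI-to-config mapping; <vmid> is the VM's ID):
Code:
# /etc/pve/qemu-server/<vmid>.conf -- relevant lines only
acpi: 1                         # ACPI ON (the default)
hostpci0: 0000:00:02,rombar=1   # no ".0" = All Functions ON; omitting pcie=1 = PCI-Express OFF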

Thank you @gregrob and @JimsGarage for helping me debug it.
 
  • Like
Reactions: Whatever

gregrob

New Member
May 15, 2024
5
0
1
Awesome - glad it's working :). I get the same i915 crash on both Ubuntu Server and Desktop. On the Ubuntu Server VM I still have a Display with the Graphics card set to Default attached; probably because of this I can still use VNC from the Proxmox web GUI to access the VM.