Minisforum MS-01 PCIe Card and RAM Compatibility Thread

Sebo

New Member
Jan 14, 2024
21
10
3
I'll be watching too. I spent some time on it and had no luck either. The Windows guest would see the iGPU but could not use it, and the Ubuntu guest would hang during boot. I also tested passing through a Sparkle Intel Arc A310 ECO and it worked OK, other than some occasional crashes when shutting down the Windows guest. I also hadn't fully blacklisted i915 on the host at that time, so maybe that was causing some weird behavior; I will test it properly soon.
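For reference, here's roughly what I mean by fully blacklisting i915 on the host. This is just a sketch for a Proxmox 8 host, and the PCI ID 8086:a780 is only an example - confirm your own iGPU's ID with lspci -nn -s 00:02.0 before using it:

Code:
# /etc/modprobe.d/blacklist-igpu.conf -- keep host graphics drivers off the iGPU
blacklist i915
blacklist xe

# /etc/modprobe.d/vfio.conf -- bind the iGPU to vfio-pci instead
# (8086:a780 is an example ID; check yours with: lspci -nn -s 00:02.0)
options vfio-pci ids=8086:a780

# then: update-initramfs -u -k all   and reboot

Whether that alone is enough for Xe iGPU passthrough on the MS-01 is exactly the open question; it just stops the host from grabbing the device.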
 

dialbat

New Member
Feb 23, 2024
16
1
3
I'll be watching too. I spent some time on it and had no luck either. The Windows guest would see the iGPU but could not use it, and the Ubuntu guest would hang during boot. I also tested passing through a Sparkle Intel Arc A310 ECO and it worked OK, other than some occasional crashes when shutting down the Windows guest. I also hadn't fully blacklisted i915 on the host at that time, so maybe that was causing some weird behavior; I will test it properly soon.
Have you guys tried this?
 

JimsGarage

New Member
May 17, 2024
10
8
3
iGPU passthrough with Windows is always difficult for me, so I haven't tested it. I currently pass it to Ubuntu 24.04 and it's been fine for Frigate, Plex, and Jellyfin.
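For anyone wanting to try the same, the Proxmox side of a setup like this mostly comes down to a hostpci entry in the guest config. A sketch only - VM ID 101 is an example, and 0000:00:02.0 is the usual iGPU address, which you should confirm with lspci:

Code:
# /etc/pve/qemu-server/101.conf (excerpt)
bios: ovmf
machine: q35
cpu: host
# pass the iGPU through as a PCIe device
hostpci0: 0000:00:02.0,pcie=1

Inside the Ubuntu guest, ls /dev/dri and vainfo are the quick checks that Frigate/Plex/Jellyfin will actually get a working render device.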
 

Sebo

New Member
Jan 14, 2024
21
10
3
iGPU passthrough with Windows is always difficult for me, so I haven't tested it. I currently pass it to Ubuntu 24.04 and it's been fine for Frigate, Plex, and Jellyfin.
Could you detail how you made it work with an Ubuntu guest? That's my main goal too. I only tested a Windows guest to see if it works at all, since the Ubuntu guest wouldn't boot.
 

anewsome

Active Member
Mar 15, 2024
130
127
43
I'll begin by asking: what all fun things do you do with your three-cluster rig? :cool:
I have 5x MS01 in a Proxmox cluster, running about 40 "permanent" always-on VMs and 10 always-on LXC containers. Storage is 15x 2TB Crucial T500 (3 per MS01). Ceph runs its public and backend networking on bond1, which is 2x 10Gb on each node. No transceivers, just direct-attach cables from each MS01 to the 10Gb switch. bond0 is 2x 2.5Gb on each MS01. Both bond0 and bond1 are LACP.
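For anyone copying the network layout, both bonds are plain 802.3ad in /etc/network/interfaces. A sketch with example interface names and addressing - match them to your own node:

Code:
# /etc/network/interfaces (excerpt) -- interface names and IPs are examples
auto bond0
iface bond0 inet manual
        bond-slaves enp87s0 enp90s0           # 2x 2.5Gb
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4

auto bond1
iface bond1 inet static
        address 10.0.10.11/24                 # Ceph public + backend traffic
        bond-slaves enp2s0f0np0 enp2s0f1np1   # 2x 10Gb
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4

The switch needs a matching LACP/LAG group for each pair, or the bonds fall back to a single active link.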

Of the 40 VMs running, really just a mix of everything I need. DHCP, DNS, active directory, a Kubernetes worker on each node, Oracle databases, nested ESXi and vCenter, Proxmox Backup Server, pfSense, Plex, iVentoy (PXE boot version of Ventoy), AzureDevops (git repository), MeshCommander, MinIO, Kasm, a lot of stuff. Kubernetes hosts a bunch of databases, media manager, Guacamole, FreshRSS and probably 50 other deployments in total.

I have them mounted vertically in 6U of a super-shallow rack - 12 inches deep. The front of the rack has 6U of fans pushing air directly into the front of the MS01s. To get the most air through them, I designed and 3D printed a custom carrier for the MS01. They slide into place and lock with a latch at the back of the rack.

The cluster also has 5x i7 Intel NUCs, not because I designed it that way, but because I happen to have them already. Currently the GPUs are Thunderbolt-attached to the i7 NUCs and passed through (PCI passthrough) to the VMs running Ollama, Open WebUI, Stable Diffusion, ComfyUI, Blender rendering and PiperTTS voice training - mostly.

The rack will get a lot cleaner once I design the mounts for the NUCs, or just get rid of them and finish the rack with more MS01s. The single 1Gb NIC on the NUCs really limits their usefulness. I also need to design/make mounts for that growing collection of power bricks under the switch. They aren't getting any cooling currently, but the MS01s and NUCs are icy cold in normal operation.

[Attached images: IMG_0375, IMG_0372]
 

GreenAvacado

Active Member
Sep 25, 2022
176
84
28
I have 5x MS01 in a Proxmox cluster, running about 40 "permanent" always-on VMs and 10 always-on LXC containers. Storage is 15x 2TB Crucial T500 (3 per MS01). Ceph runs its public and backend networking on bond1, which is 2x 10Gb on each node. No transceivers, just direct-attach cables from each MS01 to the 10Gb switch. bond0 is 2x 2.5Gb on each MS01. Both bond0 and bond1 are LACP.

Of the 40 VMs running, really just a mix of everything I need. DHCP, DNS, active directory, a Kubernetes worker on each node, Oracle databases, nested ESXi and vCenter, Proxmox Backup Server, pfSense, Plex, iVentoy (PXE boot version of Ventoy), AzureDevops (git repository), MeshCommander, MinIO, Kasm, a lot of stuff. Kubernetes hosts a bunch of databases, media manager, Guacamole, FreshRSS and probably 50 other deployments in total.

I have them mounted vertically in 6U of a super-shallow rack - 12 inches deep. The front of the rack has 6U of fans pushing air directly into the front of the MS01s. To get the most air through them, I designed and 3D printed a custom carrier for the MS01. They slide into place and lock with a latch at the back of the rack.

The cluster also has 5x i7 Intel NUCs, not because I designed it that way, but because I happen to have them already. Currently the GPUs are Thunderbolt-attached to the i7 NUCs and passed through (PCI passthrough) to the VMs running Ollama, Open WebUI, Stable Diffusion, ComfyUI, Blender rendering and PiperTTS voice training - mostly.

The rack will get a lot cleaner once I design the mounts for the NUCs, or just get rid of them and finish the rack with more MS01s. The single 1Gb NIC on the NUCs really limits their usefulness. I also need to design/make mounts for that growing collection of power bricks under the switch. They aren't getting any cooling currently, but the MS01s and NUCs are icy cold in normal operation.

View attachment 36782 View attachment 36783
Damn, love it. Impressive rig. I hope you're not sitting anywhere close to this monster of a setup :cool:

I do wonder how much your electric bill runs each month ;)
 

anewsome

Active Member
Mar 15, 2024
130
127
43
Damn, love it. Impressive rig. I hope you're not sitting anywhere close to this monster of a setup :cool:

I do wonder how much your electric bill runs each month ;)
I haven't got my first electric bill yet, but I can guarantee it'll be less than the hot & noisy 12U of Xeons it replaced. This tiny rack is cool and quiet enough to sit inches from my keyboard and monitor. It's not even waist high, I'm in meetings all day, and it doesn't bother me a bit. The thermal performance has been better than I expected; rack fans are on speed 4 (of a 1-5 range). I feel like I can knock the speed down to 3 and still be OK. I can also replace the cheap fans with better/quieter ones, which I plan to do someday. Memory is a lot tighter in this rack compared to the 3TB of RAM in the Xeon rack, but I have no regrets. This lil rack rocks.
 

anewsome

Active Member
Mar 15, 2024
130
127
43
How come you've gone for such high performance storage for each slot? I bought one of those for the fastest SSD slot but put a slightly cheaper P3 in the second to save a bit of cash.
There are 20 NVMe drives plus spares for replacements when they go bad or wear out. That's just in this rack, not including the NVMe drives in the laptops and other systems. I find it easier to manage the spares if everything is the same. I recently did the same with spinning disks too: 20 spinning disks, all swapped to be the same model in every drive slot - or actually that's still in progress, but nearly done. Same goes there: a spare is a spare is a spare, so there's no need to keep multiple models of spares on hand. I keep telling myself these are the last hard drives I'll ever buy, but ISOs, movies, TV shows, music, backups are all just too big to keep on flash - so spinning disk it is.

I'm aware that using the Crucial in the slowest slot on the MS01 is kind of a waste of a rather speedy drive. But the speed is still acceptable. It was a last-minute decision to go with consumer drives at all; my original plan was to go with enterprise NVMe across the board, but that just wasn't in the cards (or the wallet). These NVMe drives stuffed inside the MS01 kinda make me nervous anyway. I'm really not looking forward to downing an entire node to replace one when it goes bad. What I'd really like is for all the NVMe to be hot-swappable ;-)
 

alphasite

New Member
Oct 11, 2021
17
16
3
My MS-01 (with i9-13900H CPU) arrived Sunday and my 96GB Crucial kit arrived today. One of the 48GB DIMMs is defective: I tried each one in both slots, and one of them works in either slot while the other doesn't. If I have them both installed I get a blank screen. Amazon only gives me the option to return the memory - they won't send a replacement - so I've ordered another kit, which I'll have tomorrow.

Hopefully both of the new ones work. I suppose if one of the new ones is defective I could use the good one from each kit and return the two bad DIMMs; there's no serial number on the package.
 

JimsGarage

New Member
May 17, 2024
10
8
3
That does not work on my MS-01 and seems incomplete compared to other iGPU passthrough tutorials I've seen. "You may need to add a ROM BAR" is a huge rabbit hole unto itself. I'm not aware of anyone reporting successfully getting Xe iGPU passthrough to work on an MS-01.
Interesting - it's working for me on all 3 of my nodes, passed to a k3s cluster running Ubuntu 24.04.
I also have one passed to a Docker VM running Frigate CCTV, where it's used for object detection and acceleration.
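On the ROM BAR point from the quote: in Proxmox that's just extra options on the same hostpci line. A sketch only - the PCI address and ROM filename are examples, not something I've verified is required on the MS-01:

Code:
# /etc/pve/qemu-server/<vmid>.conf (excerpt)
# rombar=1 exposes a ROM BAR to the guest; romfile= points at a ROM image
# placed in /usr/share/kvm/ (the filename here is just an example)
hostpci0: 0000:00:02.0,pcie=1,rombar=1,romfile=igpu.rom

Dumping the ROM in the first place (e.g. via /sys/bus/pci/devices/0000:00:02.0/rom) is its own rabbit hole, which is presumably what the quoted post was getting at.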

What issues do you face?
 

GreenAvacado

Active Member
Sep 25, 2022
176
84
28
@JimsGarage, glad I ran into your YT channel the other day - it got me going with Authentik. Nice to see you here :cool:

Speaking of the MS-01, did you by any chance try to do PCIe passthrough of the USB4 port controller? For some reason - and others have reported this too - it does not work. I'm hoping to only do data over it, but I think the fact that it can also do video could be the reason the passed-through controller can't recognize USB devices connected to it.
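For what it's worth, the first things worth checking on the host are which PCI device the USB4/Thunderbolt controller actually is and what shares its IOMMU group. Generic commands - addresses will differ per machine:

Code:
# find the Thunderbolt/USB4 controller(s)
lspci -nn | grep -iE 'thunderbolt|usb4'

# list every IOMMU group and its member devices
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group $(basename "$g"):"
    ls "$g/devices"
done

If the controller shares a group with other devices, or the guest sees the controller but never enumerates anything plugged into it, that at least narrows down where it's failing.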
 

alphasite

New Member
Oct 11, 2021
17
16
3
My MS-01 (with i9-13900H CPU) arrived Sunday and my 96GB Crucial kit arrived today. One of the 48GB DIMMs is defective: I tried each one in both slots, and one of them works in either slot while the other doesn't. If I have them both installed I get a blank screen. Amazon only gives me the option to return the memory - they won't send a replacement - so I've ordered another kit, which I'll have tomorrow.

Hopefully both of the new ones work. I suppose if one of the new ones is defective I could use the good one from each kit and return the two bad DIMMs; there's no serial number on the package.
The new memory arrived today and both DIMMs work. I'm running a RAM test now.
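For anyone doing the same, the usual options are a bootable MemTest86/Memtest86+ run, or an in-OS pass with memtester from a Linux environment. A rough example - size it below the installed 96GB so the OS keeps some headroom:

Code:
sudo apt install memtester
# test ~80GiB for 2 passes; this will take a while
sudo memtester 80G 2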
 

alphasite

New Member
Oct 11, 2021
17
16
3
I have VMUG, so I'm thinking of running ESXi 8 on the MS-01 instead of Proxmox since I'm more familiar with ESXi. Is anyone running ESXi with just the two options cpuUniformityHardCheckPanic=FALSE and ignoreMsrFaults=TRUE and all cores left enabled? If so, do you have any issues?
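In case it helps others searching, my understanding (worth verifying against current guidance) is that those two options are VMkernel settings applied roughly like this:

Code:
# at the ESXi installer boot prompt, press Shift+O and append:
cpuUniformityHardCheckPanic=FALSE

# after installation, persist both from the ESXi shell / SSH:
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
esxcli system settings kernel set -s ignoreMsrFaults -v TRUE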