Minisforum MS-01 PCIe Card and RAM Compatibility Thread


h0schi

Member
Oct 24, 2020
73
48
18
Germany
I tried to "shrink" my Kingston DC1500M, but without success; there was no way to fit it in.
Smaller bolts to reduce the height didn't work either.

IMG_1262.jpeg

I ordered a PM9A3 now.
 
Last edited:

mauimauer

New Member
May 30, 2024
3
9
3
Phew, finally got HDMI output working on Windows with full iGPU passthrough.

Turns out I needed the GOP ROM straight from the BIOS file, instead of the generic 12th/13th-gen one provided over at: GitHub - gangqizai/igd: Intel 核显直通 rom / Intel Integrated GPU passrough rom file for PVE (all other instructions apply, machine type i440fx etc.). The GOP can be extracted from the 1.22 BIOS file using MMTool (look for the option ROM matching the iGPU PCI ID).

BlueChris

Active Member
Jul 18, 2021
155
56
28
53
Athens-Greece
Phew, finally got HDMI output working on Windows with full iGPU passthrough.

Turns out I needed the GOP ROM straight from the BIOS file, instead of the generic 12th/13th-gen one provided over at: GitHub - gangqizai/igd: Intel 核显直通 rom / Intel Integrated GPU passrough rom file for PVE (all other instructions apply, machine type i440fx etc.). The GOP can be extracted from the 1.22 BIOS file using MMTool (look for the option ROM matching the iGPU PCI ID).
A Windows VM! Oh, great news! Can you share the ROM file, please?
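For anyone trying to reproduce the quoted setup, here is a rough sketch of the Proxmox side only. VMID 100 and the file name igd-gop.rom are my placeholders, not from the post; the ROM extraction itself happens in MMTool as described above.

```shell
# Copy the GOP ROM extracted with MMTool to where QEMU looks for ROM files.
cp igd-gop.rom /usr/share/kvm/

# Attach the iGPU with the custom ROM; the linked guide uses an i440fx machine
# type rather than q35.
qm set 100 --machine pc-i440fx-8.1
qm set 100 --hostpci0 0000:00:02.0,romfile=igd-gop.rom
```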
 

ufear

New Member
May 1, 2022
20
13
3
Utrecht, Netherlands
Mine died sometime last week; I just found it powered off, unable to turn on.

Seems to be a short between layers of the board, so there's not much I can do. Let's see what their warranty process looks like; I had two more on the wish-list!
 

reneil1337

New Member
Jun 10, 2024
15
15
3
reneil.eth.limo
When I started using it last week, my MS-01 + Unraid setup suffered regular crashes every few hours (black screen, not reacting to anything; the only solution was a hard reset). I tried updating the BIOS to 1.22, deactivating the efficiency cores, and changing C-states settings, but without success. It never made it through the night, and it turned out to be a thermal problem.

I finally managed to fix the crashes without disabling the efficiency cores or touching C-states at all. My homelab is in a different room, so my goal is not to silence the machine but to maximize performance while ensuring 100% stability. I'm running BIOS 1.22 and did a few things to improve overall thermals, which resulted in far better stability:

1) Repasted CPU with Thermal Grizzly Kryonaut after reading
https://forums.servethehome.com/ind...nd-ram-compatibility-thread.42785/post-415479

There is already a video tutorial on YouTube.

2) Maxed out the PL2 TDP limit in the BIOS to 115000, assuming the system might not be getting enough juice and crashing because of it
https://forums.servethehome.com/ind...nd-ram-compatibility-thread.42785/post-415333

3) Adjusted the fan curve so the fans spin up earlier, and increased overall fan speed to lower temperatures
https://forums.servethehome.com/ind...nd-ram-compatibility-thread.42785/post-415381

My Unraid server has now been online for 2d19h without a single crash!
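If anyone wants to verify fixes like these on their own box, these are the kinds of checks I'd run. A sketch only: the lm-sensors and stress-ng packages are assumptions and may need installing first.

```shell
# Spot-check CPU package and per-core temperatures.
sensors | grep -E 'Package|Core'

# Load all cores for five minutes while watching the temperatures live;
# a thermal problem usually shows up as temps pinned near Tjmax or a crash.
stress-ng --cpu 0 --timeout 300 &
watch -n 2 "sensors | grep -E 'Package|Core'"
```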


Best,
Reneil
 

cableslayer

New Member
Oct 22, 2022
2
2
3
Guys, has anyone successfully managed to pass one of the X710 ports through to a VM in Proxmox? I need one of the two, but whatever I do it passes both of them.
I also tried the SR-IOV route: I set the number of VFs to 4 and tried to pass one of the new devices that appeared to a VM. It seems OK, but the VM doesn't have internet...
Hi there,

The web GUI sometimes has trouble setting the right device ID.
Use the shell and lspci to get the PCI ID (e.g. 00:02.0) you'd like to pass through, then:
qm set VMID -hostpci0 00:02.0

Hope this helps.
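On the SR-IOV side, the usual sequence looks roughly like this. A sketch, not a verified recipe: the interface name enp6s0f0 is an assumption (check yours with `ip link`), and X710/i40e VFs often pass no traffic until a MAC is administratively set on the VF, which may explain the "no internet" symptom.

```shell
# Create 4 virtual functions on the first X710 port (i40e driver).
echo 4 > /sys/class/net/enp6s0f0/device/sriov_numvfs

# i40e VFs frequently need an admin-set MAC before they pass traffic:
ip link set enp6s0f0 vf 0 mac 02:00:00:00:00:01

# The VFs show up as new PCI devices that can be passed to VMs individually:
lspci -nn | grep -i 'Virtual Function'
```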
 
  • Like
Reactions: BlueChris

rayleecs

New Member
Apr 22, 2017
1
0
1
39
@jro77 @Pikeman1868 I just tested my MS-01 (1.22 BIOS) again with Sparkle Intel Arc A310 Eco and it posts.

Here is lspci output from Proxmox

Bash:
00:00.0 Host bridge [0600]: Intel Corporation Device [8086:a706]
00:01.0 PCI bridge [0604]: Intel Corporation Device [8086:a70d]
00:06.0 PCI bridge [0604]: Intel Corporation Raptor Lake PCIe 4.0 Graphics Port [8086:a74d]
00:06.2 PCI bridge [0604]: Intel Corporation Device [8086:a73d]
00:07.0 PCI bridge [0604]: Intel Corporation Raptor Lake-P Thunderbolt 4 PCI Express Root Port [8086:a76e]
00:07.2 PCI bridge [0604]: Intel Corporation Raptor Lake-P Thunderbolt 4 PCI Express Root Port [8086:a72f]
00:0d.0 USB controller [0c03]: Intel Corporation Raptor Lake-P Thunderbolt 4 USB Controller [8086:a71e]
00:0d.2 USB controller [0c03]: Intel Corporation Raptor Lake-P Thunderbolt 4 NHI [8086:a73e]
00:0d.3 USB controller [0c03]: Intel Corporation Raptor Lake-P Thunderbolt 4 NHI [8086:a76d]
00:14.0 USB controller [0c03]: Intel Corporation Alder Lake PCH USB 3.2 xHCI Host Controller [8086:51ed] (rev 01)
00:14.2 RAM memory [0500]: Intel Corporation Alder Lake PCH Shared SRAM [8086:51ef] (rev 01)
00:16.0 Communication controller [0780]: Intel Corporation Alder Lake PCH HECI Controller [8086:51e0] (rev 01)
00:16.3 Serial controller [0700]: Intel Corporation Alder Lake AMT SOL Redirection [8086:51e3] (rev 01)
00:1c.0 PCI bridge [0604]: Intel Corporation Alder Lake-P PCH PCIe Root Port [8086:51bb] (rev 01)
00:1c.4 PCI bridge [0604]: Intel Corporation Device [8086:51bc] (rev 01)
00:1d.0 PCI bridge [0604]: Intel Corporation Alder Lake PCI Express Root Port [8086:51b0] (rev 01)
00:1d.2 PCI bridge [0604]: Intel Corporation Device [8086:51b2] (rev 01)
00:1d.3 PCI bridge [0604]: Intel Corporation Device [8086:51b3] (rev 01)
00:1f.0 ISA bridge [0601]: Intel Corporation Raptor Lake LPC/eSPI Controller [8086:519d] (rev 01)
00:1f.3 Audio device [0403]: Intel Corporation Raptor Lake-P/U/H cAVS [8086:51ca] (rev 01)
00:1f.4 SMBus [0c05]: Intel Corporation Alder Lake PCH-P SMBus Host Controller [8086:51a3] (rev 01)
00:1f.5 Serial bus controller [0c80]: Intel Corporation Alder Lake-P PCH SPI Controller [8086:51a4] (rev 01)
01:00.0 PCI bridge [0604]: Intel Corporation Device [8086:4fa1] (rev 01)
02:01.0 PCI bridge [0604]: Intel Corporation Device [8086:4fa4]
02:04.0 PCI bridge [0604]: Intel Corporation Device [8086:4fa4]
03:00.0 VGA compatible controller [0300]: Intel Corporation DG2 [Arc A310] [8086:56a6] (rev 05)
04:00.0 Audio device [0403]: Intel Corporation DG2 Audio Controller [8086:4f92]
05:00.0 Non-Volatile memory controller [0108]: Kingston Technology Company, Inc. OM8PGP4 NVMe PCIe SSD (DRAM-less) [2646:501b]
06:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [8086:1572] (rev 02)
06:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [8086:1572] (rev 02)
5b:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I226-V [8086:125c] (rev 04)
5c:00.0 Non-Volatile memory controller [0108]: MAXIO Technology (Hangzhou) Ltd. NVMe SSD Controller MAP1602 [1e4b:1602] (rev 01)
5d:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]
5e:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I226-LM [8086:125b] (rev 04)
5f:00.0 Network controller [0280]: MEDIATEK Corp. MT7922 802.11ax PCI Express Wireless Network Adapter [14c3:0616]
These are the card's own PCI devices:
03:00.0 VGA compatible controller [0300]: Intel Corporation DG2 [Arc A310] [8086:56a6] (rev 05)
04:00.0 Audio device [0403]: Intel Corporation DG2 Audio Controller [8086:4f92]
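If you want to filter a full `lspci -nn` dump down to just the card, a small helper like this works. A sketch: the two IDs are the A310's VGA and audio functions from the listing above.

```shell
# Print only the lines whose [vendor:device] ID matches one of the
# A310 card's two functions (VGA 8086:56a6, audio 8086:4f92).
card_devs() {
    grep -E '\[8086:(56a6|4f92)\]'
}

# Usage: lspci -nn | card_devs
```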

And lshw:

Bash:
root@proxmox-ms01:~# lshw -c video
  *-display UNCLAIMED     
       description: VGA compatible controller
       product: DG2 [Arc A310]
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:03:00.0
       version: 05
       width: 64 bits
       clock: 33MHz
       capabilities: pciexpress msi pm vga_controller bus_master cap_list
       configuration: latency=0
       resources: iomemory:400-3ff memory:6d000000-6dffffff memory:4000000000-40ffffffff memory:6e000000-6e1fffff
  *-graphics
       product: simpledrmdrmfb
       physical id: 4
       logical name: /dev/fb0
       capabilities: fb
       configuration: depth=32 resolution=800,600
Thank you for testing. I'm about to get the same setup soon, and I wonder if there are any tricks or a guide you followed to make this work?
 

jester

New Member
May 9, 2020
25
5
3
Hi. I'm considering getting an MS-01, and I hope it's not hijacking to ask this question here, which seems to be the main place on STH for MS-01 issues.

Basically: I'll be using this as a Proxmox server only, and I have no need to attach a monitor and keyboard to it. But I've never used vPro before, only IPMI-based systems, and I don't know how this works. Um, how does it work? The Minisforum page has a nice picture saying "Control via network cable", showing MeshCommander running and displaying the BIOS, but that page, and the downloadable user manual, doesn't say anything about how to actually get this set up.

And it's not obvious where to plug in a network cable for this--there's two SFP+ ports and two 2.5G ports, but there's no extra port for management.

The overall point is, I'd like to plug a network cable into it somewhere and control everything, including initial installation, over the network, and I'd like to know how to do that.
 

anewsome

Active Member
Mar 15, 2024
125
125
43
Hi. I'm considering getting an MS-01, and I hope it's not hijacking to ask this question here, which seems to be the main place on STH for MS-01 issues.

Basically: I'll be using this as a Proxmox server only, and I have no need to attach a monitor and keyboard to it. But I've never used vPro before, only IPMI-based systems, and I don't know how this works. Um, how does it work? The Minisforum page has a nice picture saying "Control via network cable", showing MeshCommander running and displaying the BIOS, but that page, and the downloadable user manual, doesn't say anything about how to actually get this set up.

And it's not obvious where to plug in a network cable for this--there's two SFP+ ports and two 2.5G ports, but there's no extra port for management.

The overall point is, I'd like to plug a network cable into it somewhere and control everything, including initial installation, over the network, and I'd like to know how to do that.
There's a thread here on the forum all about MS01 vPro management. Short answer: the 2.5G LAN port next to the 10G SFP+ ports is the vPro port. The IP can be dedicated or shared with the host OS; I opted for dedicated IPs for vPro management, separate from the Proxmox IP address. To set up the port and enable management, boot the system, press Escape, click Setup, and go into the Intel AMT menu to configure it. I've tried a couple of different apps for management, each having something I like better than the others, but I settled on MeshCommander. The remote desktop is workable, but its serial-over-LAN leaves a lot to be desired.
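As a CLI alternative to MeshCommander, the amtterm package can talk to AMT directly. A sketch, not from the post above: the 192.168.1.50 address and the password are my placeholders.

```shell
# Serial-over-LAN console to the vPro port (amtterm ships in the amtterm package).
amtterm -u admin -p 'yourAMTpassword' 192.168.1.50

# Basic status and power control with amttool from the same package;
# it reads the password from the AMT_PASSWORD environment variable.
AMT_PASSWORD='yourAMTpassword' amttool 192.168.1.50 info
AMT_PASSWORD='yourAMTpassword' amttool 192.168.1.50 powercycle
```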
 

jester

New Member
May 9, 2020
25
5
3
There's a thread here on the forum all about MS01 vPro management. Short answer: the 2.5G LAN port next to the 10G SFP+ ports is the vPro port. The IP can be dedicated or shared with the host OS; I opted for dedicated IPs for vPro management, separate from the Proxmox IP address. To set up the port and enable management, boot the system, press Escape, click Setup, and go into the Intel AMT menu to configure it. I've tried a couple of different apps for management, each having something I like better than the others, but I settled on MeshCommander. The remote desktop is workable, but its serial-over-LAN leaves a lot to be desired.
Oh, thanks. Found that thread now. So I guess I have to at least hook stuff up for the initial setup. I think I'd also use a separate IP for management (it's what I do now for IPMI on my other Proxmox server).

I don't plan on using this for any extensive management, just to do basic stuff if things crash.
 

anewsome

Active Member
Mar 15, 2024
125
125
43
Oh, thanks. Found that thread now. So I guess I have to at least hook stuff up for the initial setup. I think I'd also use a separate IP for management (it's what I do now for IPMI on my other Proxmox server).

I don't plan on using this for any extensive management, just to do basic stuff if things crash.
Unfortunately, the precise time that remote management would be most useful is when the system crashes. We've all learned that when the system gets wedged, vPro management gets wedged along with it. Only cure is to physically pull the power cord and reinsert. After struggling many weeks with random Proxmox crashes, I'm finally starting to feel a bit stable with over 15 days of uptime across a 5 server cluster of MS01s. Can't say exactly what stabilized it but I think the BIOS update to v1.22 helped.
 

jester

New Member
May 9, 2020
25
5
3
Unfortunately, the precise time that remote management would be most useful is when the system crashes. We've all learned that when the system gets wedged, vPro management gets wedged along with it. Only cure is to physically pull the power cord and reinsert. After struggling many weeks with random Proxmox crashes, I'm finally starting to feel a bit stable with over 15 days of uptime across a 5 server cluster of MS01s. Can't say exactly what stabilized it but I think the BIOS update to v1.22 helped.
Huh, this is upsetting to hear. Is this a Proxmox issue, or an MS-01 issue?

I plan to use this to run servers on Proxmox, and I'm used to servers having uptime measured in years. The idea of celebrating a 15-day uptime cycle is concerning.
 
  • Like
Reactions: minisfckr-01

anewsome

Active Member
Mar 15, 2024
125
125
43
Huh, this is upsetting to hear. Is this a Proxmox issue, or an MS-01 issue?

I plan to use this to run servers on Proxmox, and I'm used to servers having uptime measured in years. The idea of celebrating a 15-day uptime cycle is concerning.
I too measure my uptime in years. My previous Proxmox cluster never crashed: not a single node, not once, for any reason. Those nodes are old 2U Xeon boxes: cheap, loud, power hungry, and hot, but as stable as can be. In fact, the MS01 crashes are the first I've ever seen on Proxmox. Very upsetting indeed, especially considering how much I've spent getting this cluster up and running. Personally, I think it's the i9 processor. I've been following reports from gamers claiming that the i9 randomly crashes on specific games; maybe the Proxmox crashes have the same cause. It's not just me seeing them either: a lot of users in various threads report the same. I've been keeping a keen eye on everything I can to try to track down exactly what causes the crashes. Whatever it is, I hope they find it and fix it soon, "they" being Intel, the kernel devs, the Proxmox devs, the MS01 devs, or whomever. I don't really care who does it, but it needs to be fixed.
 

millercentral

New Member
Jun 15, 2024
2
5
3
@JaxJiang
Can you address if a coral will work in the wifi slot? And how many lanes the wifi slot has, if it will support the dual coral?
I can confirm that the Dual Edge E-key form factor does not work in the WiFi slot: it fits but doesn't enumerate. I'm guessing the slot has only one PCIe lane, which the product page warns will not work. I haven't tried the A+E-key variant.
 
  • Like
Reactions: phili76

mvadu

New Member
May 13, 2024
9
0
1
Hello all, does anyone know whether the individual M.2 NVMe disk ports can be passed through to a QEMU VM in Proxmox? As of now I don't see them in the Add PCI Device screen in Proxmox 8.2.2 (latest).
 

mvadu

New Member
May 13, 2024
9
0
1
Hello all, does anyone know whether the individual M.2 NVMe disk ports can be passed through to a QEMU VM in Proxmox? As of now I don't see them in the Add PCI Device screen in Proxmox 8.2.2 (latest).
Answering my own question: `lspci | grep Non-Volatile` in the Proxmox shell showed me all three disks. Even though they are the exact same make/brand, it looks like they use two different controllers. I was able to expose the disks natively to OMV using these PCIe IDs.
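For the passthrough itself, a sketch: VMID 100 is a placeholder, and the 5c:00.0 address is one of the controllers from the lspci output earlier in this thread, so it will differ per machine.

```shell
# -D prints the full domain:bus:device.function address that hostpci expects.
lspci -Dnn | grep -i 'Non-Volatile'

# Pass one NVMe controller through to the VM; pcie=1 requires a q35 machine type.
qm set 100 --hostpci0 0000:5c:00.0,pcie=1
```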
 

daylightroberty

New Member
Jun 18, 2024
11
9
3
Hello everyone. This is my first post. Looks like I'm late to the MS-01 party here.

I've been trying to talk myself into buying one. This will be easier to do if it can somehow replace my 4x bay NAS. I already have a UM790 Pro running Proxmox so the addition of an MS-01 could be the start of a nice USB4 connected cluster.

In terms of attaching the disks, I'm looking at alternatives to LSI/HBA due to the heat/space limitation.

My question is whether an expansion card like this would work in one of the NVMe slots:

s-l1600.jpg
M.2 NVMe to Mini SAS 36 Pin SFF-8087 SATA 3.0 Expansion Card M.2 Key-m Key-b | eBay

I'm reading stuff about bifurcation here. I'm not clear on whether a SAS connector counts as 1x SAS or 4x SATA.

I'm not dead set on ZFS, so if there's a way a card like this could be made to work with EXT4 I'd be quite happy with that.

I'm thinking this could be a neat solution with a bracket like this to interface the connector to the wider cupboard:


https://www.amazon.co.uk/Cablecc-SFF-8088-SFF-8087-Adapter-Bracket/dp/B06ZZDLJH6

After that I'd be looking at a random internal disk cage and a molex PSU.

My use case for the HDDs is nightly Duplicati backups of working files and PBS, a small Plex/Jellyfin library, and hosting offsite backups for a couple of friends. I currently have my 4x4TB disks configured in RAID5, which I'm comfortable with; if a disk goes pop I'll copy the data to a new volume.

Apart from that I am running self hosted dev environments and fast network storage for music production. My current setup is fine but the NAS has some quirks that bother me and I would rather abstract my storage from the device.

Thanks
 

mauimauer

New Member
May 30, 2024
3
9
3
I can confirm that the Dual Edge E-key form factor does not work in the WiFi slot: it fits but doesn't enumerate. I'm guessing the slot has only one PCIe lane, which the product page warns will not work. I haven't tried the A+E-key variant.
I got it to work with an M.2 A+E to B+M riser and the Dual Edge TPU adapter:

It's a bit janky, but I 3D-printed a small adapter to keep it from flopping around in the space that would otherwise be occupied by a U.2 drive.

Code:
5d:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
5e:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU