Minisforum MS-01 PCIe Card and RAM Compatibility Thread


jmoschetti45

New Member
Feb 24, 2024
5
0
1
Finally got my MS-01 running, mostly.

64GB RAM CT2K32G56C46S5
4TB 990 PRO in 4x4 slot
1TB 990 PRO in 3x4 slot
512GB WD Red SN700 in 3x2 slot
Wifi card removed

This configuration works fine running Debian 12. Once I pop my Radeon PRO W6400 in, I hang at a black screen on boot. No logo, nothing. The Ethernet port LEDs are on, but there's no keyboard response. Taking the card out still leaves me with the black screen; I have to clear the CMOS by removing the battery for a few minutes, and then it boots back up fine. I've tried it a few times and get the same problem almost every time. Once in a great while it'll actually boot with the card in, but I'd say it's 1 in 20 or worse.

Edit: Updated to latest BIOS already

Any ideas?
 

MrNova

New Member
Dec 10, 2024
3
0
1
I've been bashing my head against the wall trying to figure out some issues I'm seeing with LACP bonding the 10G NICs, and I'm wondering if anyone has come across anything similar. I have 4 MS-01s: two i9-13900H (v1.22 BIOS, 96GB RAM, two NVMe drives each) and two i5-12600H (v1.26 BIOS, 64GB RAM, two NVMe drives each). Three of these are running in a Proxmox Ceph cluster, and the fourth (an i5) lives in a separate three-node Ceph cluster. The clusters run in separate networks/locations. The 10G ports are all connected via DACs to USW Aggregation switches, with LACP bonding and layer 3+4 hashing policies enabled in Proxmox and on the switch. Links are reported as 20G. Everything looks great, except I've been unable to get >10G speeds even when using multiple clients in iperf and when benchmarking Ceph, despite going through pages and pages of forum and Reddit posts and trying every config tweak I've come across. Using Proxmox 6.8.12 and Ceph 18.2.4 in both clusters.

What's really driving me crazy is that in the second cluster, which has two non-MS-01 nodes, I AM able to see the expected increased speed. Ceph sequential reads clock in at ~15G, and I can get nearly 20G when running multiple iperf clients. But here's the kicker: I can only see the speedup in iperf if one of the NON-MS-01 nodes is the server. As soon as I use an MS-01 as the server, I drop back down to 10G total across multiple clients. The only difference I've spotted so far is that the MS-01s are using the i40e NIC driver and the others are using ixgbe.
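For reference, the multi-client iperf runs look roughly like this (iperf3, with placeholder IPs/ports - two listeners on the node under test, hit from separate client machines):

# on the node acting as the server
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &

# from two separate client machines (10.0.0.10 is a placeholder address)
iperf3 -c 10.0.0.10 -p 5201 -t 30
iperf3 -c 10.0.0.10 -p 5202 -t 30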

Other than this issue, these have all been rock solid, with reasonable temps and vPro working across the board.

EDIT: I'm noticing this in the logs:
kernel: i40e 0000:02:00.0: PCI-Express: Speed 8.0GT/s Width x4
kernel: i40e 0000:02:00.0: PCI-Express bandwidth available for this device may be insufficient for optimal performance.
kernel: i40e 0000:02:00.0: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.

Is it possible there are insufficient PCIe lanes for both NICs to run at full capacity in parallel?
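If anyone wants to check what the link actually negotiated, this is the sort of thing I looked at (as root, PCI address taken from the log lines above):

# capability vs. negotiated link speed/width for the X710
lspci -s 02:00.0 -vv | grep -E 'LnkCap|LnkSta'

# same info via sysfs
cat /sys/bus/pci/devices/0000:02:00.0/current_link_speed
cat /sys/bus/pci/devices/0000:02:00.0/current_link_width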
Self-update: after a lot of examination I was able to determine that when bonded in LACP mode with a supposedly compatible switch (UniFi Aggregation, the regular one with 8 SFP+ ports, not the Pro), only one of the two NICs was being used for RX. This was true even when using multiple clients, IP addresses, ports, and hashing algorithms. This did not occur with the non-MS-01 nodes, and I saw this behavior on all of my MS-01 nodes. I've tried updating the NIC firmware and various versions of the Proxmox kernel. I eventually switched the MS-01s to balance-rr instead of LACP and magically it all works now, seeing around 18Gb/s of total bandwidth at times. Curious whether anyone with a third-party NIC in the PCIe slot is seeing similar behavior. I can't tell if this is an issue with the MS-01s, the UniFi switch, or the i40e driver right now.
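For anyone who wants to compare notes, the bond stanza I've been testing in /etc/network/interfaces looks roughly like this (interface names are placeholders for the two SFP+ ports; the commented line is the balance-rr variant that ended up working for me):

auto bond0
iface bond0 inet manual
    bond-slaves enp2s0f0 enp2s0f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100
    # bond-mode balance-rr   <- swap this in (instead of 802.3ad) for the mode that spread RX across both NICs

Watching cat /proc/net/bonding/bond0 and ip -s link show on each slave is how I confirmed that only one port was actually receiving under 802.3ad.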

A Windows 10 client running the CrystalDiskMark SEQ1M Q8T1 benchmark gets 2235MB/s read and 383MB/s write against the Ceph storage pool (2TB Samsung 990 Pros in each of the minis).
 

anewsome

Active Member
Mar 15, 2024
130
127
43
Self-update: after a lot of examination I was able to determine that when bonded in LACP mode with a supposedly compatible switch (UniFi Aggregation, the regular one with 8 SFP+ ports, not the Pro), only one of the two NICs was being used for RX. This was true even when using multiple clients, IP addresses, ports, and hashing algorithms. This did not occur with the non-MS-01 nodes, and I saw this behavior on all of my MS-01 nodes. I've tried updating the NIC firmware and various versions of the Proxmox kernel. I eventually switched the MS-01s to balance-rr instead of LACP and magically it all works now, seeing around 18Gb/s of total bandwidth at times. Curious whether anyone with a third-party NIC in the PCIe slot is seeing similar behavior. I can't tell if this is an issue with the MS-01s, the UniFi switch, or the i40e driver right now.

A Windows 10 client running the CrystalDiskMark SEQ1M Q8T1 benchmark gets 2235MB/s read and 383MB/s write against the Ceph storage pool (2TB Samsung 990 Pros in each of the minis).
I posted about similar weirdness with MS01 LACP bonding months ago. Interpreting the responses to my post would lead one to believe that I'm doing something wrong. I feel like I understand LACP setup requirements pretty well and how it should work. But I was not seeing that with the MS01. Plus, the vPro management of the MS01 was extra flaky for me when both 2.5g ports were bonded with LACP.

Some users reported that LACP bonding and even vPro management worked fine for them.
 

myn

New Member
Nov 18, 2024
9
2
3
Has anyone had any success exposing the MS-01's sensors to ESXi? When I go to Monitor | System Sensors it gives me the error "This system has no IPMI capabilities, you may need to install a driver to enable sensor data to be retrieved." See attached screenshot.

Thanks!
 


jmoschetti45

New Member
Feb 24, 2024
5
0
1
Can confirm the Radeon PRO W6400 works if you tape pins 5 & 6. It does not work without tape on them.

Also, I had to trim the plastic on the MS-01 case slightly to get the DP connectors to fully seat.

So it's compatible, but not plug-and-play.
 

dirac

New Member
Dec 31, 2024
1
0
1
Hello guys,

I am grateful this thread exists; it is insightful not only for the MS-01 but for general systems knowledge.

Recently I ordered an i5-12600H MS-01 and I am planning a general Proxmox deployment:

- I am wondering if it is possible to fit a 4G/5G module in place of the Wi-Fi module; if so, which one can I get?

- I want to create a RAID 1 with two M.2 SSDs, but I am not sure this will add any value, as most of the data will be on my NAS, which I will connect via NFS.

- I want to use Crucial's 48GB 5600MHz RAM modules. However, I am not sure if they work with the i5-12600H. If not, I will simply get Crucial's 32GB 5200MHz modules.

Any other feedback on the i5 model? I am not sure if I should re-paste the CPU or add a fan as a lot of people are doing. I don't think the server will be under high load.

Thanks a lot,
 

damex

Member
Apr 7, 2019
46
14
8
Hi, sorry to bring this up: I'm struggling to make it work with a T1000 8GB; it doesn't seem to start properly.
Did you change any particular setting?

My hardware:
- 2x 48GB Crucial RDIMM
- 1x Lexar 790 NVMe

Thank you.
you can't have rdimms in ms01. i think that's a typo.

i have been using a t1000 8gb in one of my ms01s for a while, with rhel9 at first and later with proxmox passing it through to a rhel9 vm.
i am not so sure there is anything special about that card in the ms01.
been using it for transcoding and it worked well. i think the igpu is more suitable for that job.

as for installation - nothing special needed to be done - just plug in the card and move along with your day.
if it is an early batch of ms01 - you might need to tape pins 5&6 like the person above did for the w6400
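roughly what the passthrough looks like on the proxmox side - the vm id and pci address below are placeholders, adjust for your own box:

# enable the iommu first, e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub

# find the card's pci address
lspci -nn | grep -i nvidia

# hand the whole card to the vm (vm 100 and 0000:01:00.0 are made-up examples)
qm set 100 -hostpci0 0000:01:00.0,pcie=1

reboot after the grub change and the vm sees the card pretty much like bare metal.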

on a side note - that ms01 had an x710 failure a few days ago and minisforum support does not reply at all.
i did send them an email with photos/videos and a huge description of the problem after making sure it had definitely failed.
had to send the device back to the seller (at my own discretion) since it was purchased locally.
 

damex

Member
Apr 7, 2019
46
14
8
Has anyone had any success exposing the MS-01's sensors to ESXi? When I go to Monitor | System Sensors it gives me the error "This system has no IPMI capabilities, you may need to install a driver to enable sensor data to be retrieved." See attached screenshot.

Thanks!
you can't have it on ms01
 

MrNova

New Member
Dec 10, 2024
3
0
1
I posted about similar weirdness with MS01 LACP bonding months ago. Interpreting the responses to my post would lead one to believe that I'm doing something wrong. I feel like I understand LACP setup requirements pretty well and how it should work. But I was not seeing that with the MS01. Plus, the vPro management of the MS01 was extra flaky for me when both 2.5g ports were bonded with LACP.

Some users reported that LACP bonding and even vPro management worked fine for them.
I did not attempt to do bonding on the 2.5G NICs, just the 10G SFP+ NICs (using one 2.5G port for vPro and the other for non-Ceph VLAN connectivity). I was going to try putting a spare X520 card into the PCIe slot and test bonding that with the onboard SFP+, but the protrusion of the SFP+ cage prevented the NIC from clearing, and I can't figure out how to gracefully remove the plastic case bits without breaking something.
 

myn

New Member
Nov 18, 2024
9
2
3
The latest BIOS update (version 1.26) mentions that it updates the Intel microcode, presumably to address the instability issues and CPU degradation.

However, after applying version 1.26 and checking the microcode version, it appears to be an older version that doesn't include the fixes introduced in microcode update 0x12B, which addresses these instability problems.


Is this behavior unique to my system, or are others experiencing the same? For those who haven't manually applied the microcode updates but have updated to BIOS version 1.26, what microcode version are you seeing?

root@pve:~# grep microcode /proc/cpuinfo | uniq
microcode : 0x411c
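Side note: microcode can also be loaded early at boot from the OS, independent of whatever the BIOS ships - a rough sketch for Debian/Proxmox, assuming the non-free-firmware repo is enabled:

# what the kernel actually loaded at boot
journalctl -k | grep microcode

# install the OS-side microcode package and rebuild the initramfs
apt install intel-microcode
update-initramfs -u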
 

wadup

Active Member
Feb 13, 2024
118
91
28
The latest BIOS update (version 1.26) mentions that it updates the Intel microcode, presumably to address the instability issues and CPU degradation.

However, after applying version 1.26 and checking the microcode version, it appears to be an older version that doesn't include the fixes introduced in microcode update 0x12B, which addresses these instability problems.


Is this behavior unique to my system, or are others experiencing the same? For those who haven't manually applied the microcode updates but have updated to BIOS version 1.26, what microcode version are you seeing?

root@pve:~# grep microcode /proc/cpuinfo | uniq
microcode : 0x411c
This is what I am getting:

# grep microcode /proc/cpuinfo | uniq
microcode : 0x4123

I reached out to Minisforum for the latest BIOS before they released it and got this:

I don't know if it is any different from the version released on their website.
 

omavel

New Member
Oct 6, 2020
3
1
3
Regarding memory, I think it is worth mentioning that the 96GB configuration only works with the Intel i9-13900H. If you order the 12th gen to save some bucks, you have the usual maximum of 64GB (2x32GB) of RAM, at least according to Intel's processor datasheet.

Is this true?

There are at least two pieces of evidence that the 12th gen supports 96GB:
Will Minisforum ms-01 12900h support 96gb of ram.
And even in this thread: https://forums.servethehome.com/ind...compatibility-thread.42785/page-3#post-407875

Right now, I'm waiting for my MS-01 with the 12600H and wondering if I can buy 2x48GB of RAM (I'm thinking about the Crucial 96GB DDR5 5600MHz kit, CT2K48G56C46S5).
 

Mastema

New Member
Dec 11, 2013
8
5
3
Is this true?

There are at least two pieces of evidence that the 12th gen supports 96GB:
Will Minisforum ms-01 12900h support 96gb of ram.
And even in this thread: https://forums.servethehome.com/ind...compatibility-thread.42785/page-3#post-407875

Right now, I'm waiting for my MS-01 with the 12600H and wondering if I can buy 2x48GB of RAM (I'm thinking about the Crucial 96GB DDR5 5600MHz kit, CT2K48G56C46S5).
All four of the 12th gen MS-01s I have (1x i9, 3x i5) have 96GB of RAM installed and see/use all 96GB. All four are using the 96GB kit you mention.
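If you want to double-check after installing, something along these lines (run as root) confirms both the capacity and the speed the modules trained at:

free -h
dmidecode -t memory | grep -E 'Size|Speed|Part Number'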
 

Arjestin

New Member
Feb 26, 2024
22
1
3
Has anyone used a U.2 to M.2 adapter like this?

I guess with this, the theoretical number of M.2 drives is:

2 on the PCIe x16 slot
2 on the M.2 ports
2 on a U.2 to M.2 adapter
1 from the Wi-Fi slot.
I haven't personally used any U.2 to M.2 adapters in an MS-01, but I have read reports that the MS-01 requires the adapter to have an integrated controller in order to split the PCIe lanes correctly. A few posts back, someone mentioned the QNAP QDA-U2MP, which does have an integrated controller, but it's physically too tall for the enclosure.
 

MonkCanatella

New Member
Feb 18, 2024
2
0
1
Sorry if this has been asked before, but has anyone had luck creating an LACP bond with the Thunderbolt ports? I'm trying to create an SMB Multichannel link between the Minisforum MS-01 and my PC using the Thunderbolt ports.

Also, this may be a stupid question, but I want the Minisforum to act as a hub that takes in an internet connection and supplies internet to my NAS and my main PC. Is that possible at all?