Minisforum MS-01 PCIe Card and RAM Compatibility Thread


Sunmeplz

New Member
Nov 1, 2024
4
0
1
I only have about 6 days of uptime, and before that, I had to reboot 6 to 8 times. The RAM has always worked fine.

I have 2x48 GB (https://www.amazon.fr/dp/B0C79T2CP7?ref=ppx_yo2ov_dt_b_fed_asin_title&th=1). I did not update the BIOS. I contacted Minisforum for the latest BIOS update, and they sent me the link for version 1.26 by email, but the README file indicated it was a beta version. So I preferred to try it without updating.
Have you reduced the memory frequency? There is a 15-page thread about the issue, and it looks like it is a memory issue. I really want to purchase an MS-01 and want to see 96 GB in there... but the heating and random reboots are stopping me at the moment.


Maybe I need to wait for the MS-02? :)


So, have you done something special for that? Did you figure out what happened?
 

Drizzik

New Member
Oct 15, 2024
6
1
3
I did not do anything special.
Here is my RAM data, hope it helps:

[screenshot: RAM details]

As for the heating issue, I really think that adding a heatsink to my M.2 drives helps.
I'll try to check the temperatures with a thermal camera.
 

Drizzik

New Member
Oct 15, 2024
6
1
3
I am used to TrueNAS, but I wanted to give Unraid a try. So, I will be using Unraid for a month.
However, I can tell you that all drives are recognized by Proxmox, Unraid, and TrueNAS.
 

Sunmeplz

New Member
Nov 1, 2024
4
0
1
I am used to TrueNAS, but I wanted to give Unraid a try. So, I will be using Unraid for a month.
However, I can tell you that all drives are recognized by Proxmox, Unraid, and TrueNAS.
I'm actually worried about the stability and overheating reports. If I can't even see the disks, it's a total failure.
 

voodek

New Member
May 7, 2024
4
0
1
I'm actually worried about the stability and overheating reports. If I can't even see the disks, it's a total failure.
I have three of these machines, 2x i9-13900H and 1x i5-12600H. One i9 is working as a TrueNAS bare-metal box with a QNAP TL-D800S attached to it, and it's doing great overall. One thing I don't like is that the NVMe drives get really hot under heavy load (3 NVMe in ZFS RAIDZ1). TrueNAS reports temperatures around 85 degrees C at most, but there is one additional sensor on the disks that reports over 100 degrees C (sensor 1 in lm-sensors), and that triggers a SMART warning. Although I must mention that I have my PWM fans adjusted to be as quiet as possible.
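For anyone who wants to script around this instead of waiting for the SMART warning, the per-sensor readings can be pulled apart with a little awk. This is a minimal sketch: the field names are assumptions based on typical `smartctl -a /dev/nvme0` output, and the sample text is embedded so the snippet runs as-is (on the MS-01 you would pipe smartctl in instead).

```shell
#!/bin/sh
# Flag any NVMe temperature sensor above a threshold.
# Sample smartctl-style output is embedded so this is self-contained;
# the exact field names are assumptions based on typical smartctl output.
THRESHOLD=85

sample_output() {
cat <<'EOF'
Temperature:                        78 Celsius
Temperature Sensor 1:               101 Celsius
Temperature Sensor 2:               76 Celsius
EOF
}

# Turn each temperature line into a "name=value" pair.
parse_temps() {
  awk -F: '/Temperature/ {
    name = $1; gsub(/^ +| +$/, "", name)   # trim the field name
    split($2, f, " ")                      # f[1] is the numeric value
    print name "=" f[1]
  }'
}

hot=$(sample_output | parse_temps | awk -F= -v t="$THRESHOLD" '$2 > t {print $1}')
echo "Sensors over ${THRESHOLD}C: ${hot:-none}"
```

With the sample above, Sensor 1 is the one flagged, which lines up with the lm-sensors reading that trips the SMART warning.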
 

BlueChris

Active Member
Jul 18, 2021
155
56
28
53
Athens-Greece
How stable is the MS-01 under a full workload with the latest BIOS version 1.26? Is it production ready now?
Not a single problem here with the first BIOS the MS-01 ever shipped with; I am on 1.17, I think, which was the first.

Hi, have you faced the 96 GB random reboot issue? What RAM are you using? Did you have a long uptime?
Never here. The memory I have is 96 GB, exactly the same as the one in the review.

Have you reduced the memory frequency? There is a 15-page thread about the issue, and it looks like it is a memory issue. I really want to purchase an MS-01 and want to see 96 GB in there... but the heating and random reboots are stopping me at the moment.


Maybe I need to wait for the MS-02? :)


So, have you done something special for that? Did you figure out what happened?
I cannot do that, but I never faced a problem.

I'm actually worried about the stability and overheating reports. If I can't even see the disks, it's a total failure.
This is certain. Too many disks overheat the machine on the disk side. An extra fan is a must in big installations.

I have three of these machines, 2x i9-13900H and 1x i5-12600H. One i9 is working as a TrueNAS bare-metal box with a QNAP TL-D800S attached to it, and it's doing great overall. One thing I don't like is that the NVMe drives get really hot under heavy load (3 NVMe in ZFS RAIDZ1). TrueNAS reports temperatures around 85 degrees C at most, but there is one additional sensor on the disks that reports over 100 degrees C (sensor 1 in lm-sensors), and that triggers a SMART warning. Although I must mention that I have my PWM fans adjusted to be as quiet as possible.
This is a no-go... you need a fan on the bottom of the machine, underneath it, to cool the NVMe drives.
I have 3 NVMe drives, plus an extra NVMe as a boot drive for my Proxmox in the Wi-Fi slot, with the adapter mentioned before.
No problems whatsoever for months with Proxmox.
 
  • Like
Reactions: Drizzik

voodek

New Member
May 7, 2024
4
0
1
Not a single problem here with the first BIOS the MS-01 ever shipped with; I am on 1.17, I think, which was the first.


Never here. The memory I have is 96 GB, exactly the same as the one in the review.


I cannot do that, but I never faced a problem.


This is certain. Too many disks overheat the machine on the disk side. An extra fan is a must in big installations.


This is a no-go... you need a fan on the bottom of the machine, underneath it, to cool the NVMe drives.
I have 3 NVMe drives, plus an extra NVMe as a boot drive for my Proxmox in the Wi-Fi slot, with the adapter mentioned before.
No problems whatsoever for months with Proxmox.
Well... I really don't care if the NVMe dies or not. On the other MS-01 I have different NVMe drives connected, and with the same fan curve their reported temperature is way lower. If any of them dies, I'll just replace it. Important data from these drives is replicated to the HDD pool either way.
 

caplam

Member
Dec 12, 2018
63
15
8
I have an MS-01 with 3 NVMe SSDs, 64 GB of RAM, and an HBA 9300. The BIOS is still 1.17. I fixed my heating problem with a fan below and another one above the MS-01, and I have never had a failure.
I run Unraid with 70 containers and 5 VMs. My current uptime is 69 days. I will bring it down soon and repaste it, as I'm completely redoing my network.
 
  • Like
Reactions: BlueChris

BlueChris

Active Member
Jul 18, 2021
155
56
28
53
Athens-Greece
Well... I really don't care if the NVMe dies or not. On the other MS-01 I have different NVMe drives connected, and with the same fan curve their reported temperature is way lower. If any of them dies, I'll just replace it. Important data from these drives is replicated to the HDD pool either way.
Oh, OK, no worries then. I personally don't have the time or the money not to care; I'd rather set the machine up so it stays OK for a long time.
 

craigmcintosh

New Member
Nov 8, 2024
1
0
1
Hi, I am Craig McIntosh, new to the forum and a new Minisforum MS-01 purchaser. I bought the MS-01 to add extra NVMe drives to the PCIe slot. I have been looking at every post for the last 5 hours and still don't know if there is a quad NVMe PCIe card that works on the Minisforum MS-01, as I want to run the 3 slots plus an additional 4 slots on Unraid.
 

trickerz

New Member
Jul 31, 2024
2
1
1
Hi, I am Craig McIntosh, new to the forum and a new Minisforum MS-01 purchaser. I bought the MS-01 to add extra NVMe drives to the PCIe slot. I have been looking at every post for the last 5 hours and still don't know if there is a quad NVMe PCIe card that works on the Minisforum MS-01, as I want to run the 3 slots plus an additional 4 slots on Unraid.
The PCIe area is not big enough for a quad NVMe card. The only option you have is to get the adapter that turns the 3 NVMe slots into 6 and add another 2x NVMe PCIe card. I'm not sure if those adapters are shipping yet, though, and it'll make a lot of the slots 1x PCIe lanes. Minisforum MS-01 6x M.2 Upgrade Card Review
 
  • Like
Reactions: craigmcintosh

anewsome

Active Member
Mar 15, 2024
125
125
43
The PCIe area is not big enough for a quad NVMe card. The only option you have is to get the adapter that turns the 3 NVMe slots into 6 and add another 2x NVMe PCIe card. I'm not sure if those adapters are shipping yet, though, and it'll make a lot of the slots 1x PCIe lanes. Minisforum MS-01 6x M.2 Upgrade Card Review
Let me be the first to inform you: there's not enough room for any card that'll hold 4x 2280 M.2 NVMe drives. A number of people have managed 2 NVMe drives in the PCIe slot, which of course requires a card with a PCIe switch built in. The motherboard can't bifurcate the x8 slot.
 
  • Like
Reactions: craigmcintosh

anewsome

Active Member
Mar 15, 2024
125
125
43
Yes, it limits it, but the speed of the array is still more than 2 GB/s, which is more than I can saturate over the 10-gig network.
I had the same with my MS-01 cluster. I used all 3 internal NVMe drives for Ceph, and yes, the speed of Ceph over the cluster was limited by the slowest slots, but it's still much faster than any spinning-disk arrays I have. The Ceph pool is *probably* faster overall with them included than it would be without them. It's certainly a bigger pool. No matter how many OSDs I have, I always seem to need more.
 

xternal

New Member
Jul 31, 2023
10
3
3
I don't really need the power of an i9, so I'm wondering if anyone has experience with the i5 version and its thermal properties? They all have the same iGPU, which is mainly what I am interested in for hardware transcoding, along with hosting Proxmox, TrueNAS, and some containers.
 

Anung_Un_Rama

New Member
Oct 24, 2024
1
0
1
Running an i5-12600H unit here, 24/7.
I re-applied the CPU thermal paste with Noctua NT-H2 and put the unit on a laptop cooling pad with two 12 cm fans, powered by USB, running constantly.
Here's what I get 99% of the time, since it's a re-streaming headless server with only a few Docker containers:
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +42.0°C (high = +100.0°C, crit = +100.0°C)
Core 0: +42.0°C (high = +100.0°C, crit = +100.0°C)
Core 4: +34.0°C (high = +100.0°C, crit = +100.0°C)
Core 8: +41.0°C (high = +100.0°C, crit = +100.0°C)
Core 12: +36.0°C (high = +100.0°C, crit = +100.0°C)
Core 16: +39.0°C (high = +100.0°C, crit = +100.0°C)
Core 17: +39.0°C (high = +100.0°C, crit = +100.0°C)
Core 18: +39.0°C (high = +100.0°C, crit = +100.0°C)
Core 19: +40.0°C (high = +100.0°C, crit = +100.0°C)
Core 20: +39.0°C (high = +100.0°C, crit = +100.0°C)
Core 21: +39.0°C (high = +100.0°C, crit = +100.0°C)
Core 22: +39.0°C (high = +100.0°C, crit = +100.0°C)
Core 23: +39.0°C (high = +100.0°C, crit = +100.0°C)
P.S. All CPU turbo options are off in the BIOS... I don't need them.
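Readings like the block above are easy to reduce to a single number for alerting or fan-curve scripts. A small sketch, with the `sensors` sample embedded so it runs anywhere (on the unit itself you would pipe `sensors` in; the output layout is assumed from the paste above):

```shell
#!/bin/sh
# Find the hottest core in lm-sensors coretemp output.
# Sample data is embedded so the script is self-contained.
sensors_sample() {
cat <<'EOF'
coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +42.0°C  (high = +100.0°C, crit = +100.0°C)
Core 0:        +42.0°C  (high = +100.0°C, crit = +100.0°C)
Core 4:        +34.0°C  (high = +100.0°C, crit = +100.0°C)
Core 8:        +41.0°C  (high = +100.0°C, crit = +100.0°C)
EOF
}

hottest_core() {
  awk '/^Core/ {
    t = $3; gsub(/[^0-9.]/, "", t)   # "+42.0°C" -> "42.0"
    gsub(/:/, "", $2)                # "0:" -> "0"
    if (t + 0 > max + 0) { max = t; core = $1 " " $2 }
  }
  END { printf "%s %.1f\n", core, max }'
}

sensors_sample | hottest_core
```

The same pipeline dropped into a cron job is enough to log the hottest core over time and sanity-check a quiet fan curve.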
 
Aug 20, 2023
90
51
18
Running 3x MS-01 here on the original BIOS since they were released, with the same 96 GB memory kit mentioned in the OP.
Absolutely no issues at all that were the fault of the MS-01.
The only crash that I have seen is when allowing the Linux bridge to use all VLANs in the network config:
bridge-vids 2-4095 would eventually cause a kernel panic, so I changed it to bridge-vids 2-100 and it never crashes.
The 10G card in the MS-01 is hardware-limited to a maximum of 200 VLANs.

 

BlueChris

Active Member
Jul 18, 2021
155
56
28
53
Athens-Greece
Running 3x MS-01 here on the original BIOS since they were released, with the same 96 GB memory kit mentioned in the OP.
Absolutely no issues at all that were the fault of the MS-01.
The only crash that I have seen is when allowing the Linux bridge to use all VLANs in the network config:
bridge-vids 2-4095 would eventually cause a kernel panic, so I changed it to bridge-vids 2-100 and it never crashes.
The 10G card in the MS-01 is hardware-limited to a maximum of 200 VLANs.
Oh yes, I had the same errors in Proxmox, though without a real problem, and I spotted the solution while searching generally about the X710 and Linux. I also put that in many months ago, and all the problems in the log disappeared.

Something extra that I have, based on info I found on the internet, is that I disabled the RX VLAN filter offload:

Code:
auto vmbr0
iface vmbr0 inet static
    address 192.168.5.16/24
    gateway 192.168.5.1
    bridge-ports enp6s0f1np1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-100
    offload-rx-vlan-filter off
Sorry I haven't mentioned it here before.
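To double-check that the offload really ended up disabled after reloading the network config, the feature flags can be inspected. This is a sketch that parses `ethtool -k`-style output; the sample is embedded so it runs as-is, and on the host you would feed it `ethtool -k enp6s0f1np1` instead (interface name taken from the config above):

```shell
#!/bin/sh
# Report the rx-vlan-filter state from ethtool-style feature output.
# Sample data is embedded so the script is self-contained.
ethtool_sample() {
cat <<'EOF'
Features for enp6s0f1np1:
rx-checksumming: on
rx-vlan-offload: on
rx-vlan-filter: off
tx-vlan-offload: on
EOF
}

vlan_filter_state() {
  awk -F': ' '$1 == "rx-vlan-filter" { print $2 }'
}

echo "rx-vlan-filter is $(ethtool_sample | vlan_filter_state)"
```

If it still reports "on" after an ifreload, the offload line in the bridge config didn't take effect.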

P.S. I see that you have all the cards in one vmbr; how do you control which interface the traffic passes through?
 
  • Like
Reactions: goletsa