Minisforum MS-01 PCIe Card and RAM Compatibility Thread

Aug 20, 2023
Oh yes, I had the same errors in Proxmox (though without any real impact) and spotted the solution while searching around about the X710 and Linux in general. I put that in place many months ago and all the problems in the log disappeared.

Something extra I did, based on info I found on the internet, is disable the offload:

Code:
auto vmbr0
iface vmbr0 inet static
    address 192.168.5.16/24
    gateway 192.168.5.1
    bridge-ports enp6s0f1np1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-100
    offload-rx-vlan-filter off
Sorry I haven't mentioned it here before.
Why disable offload?
As for why my NICs are in the same bridge: it's just for troubleshooting; only one port on the switch is enabled.
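If you want to confirm the VLAN-filter offload is actually off at runtime, you can check the physical port with ethtool. A minimal sketch, assuming the same interface name as in my config above:

Code:
# show the current state of the VLAN filter offload on the X710 port
ethtool -k enp6s0f1np1 | grep rx-vlan-filter

# turn it off by hand if it is still on (ifreload should normally apply it)
ethtool -K enp6s0f1np1 rx-vlan-filter off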
 

pimposh

hardware pimp
Nov 19, 2022
I never used it that way (bare PCB without the case). The switch chip is not cooled; the M.2 drives are cooled by the top cover via thermal pads. Guessing it might work, as long as the drives are kept cool.
 

Drizzik

New Member
Oct 15, 2024
Here are some updates about my 6-M.2 MS-01 (5x 4TB + 1x 500GB).
Below, you can see the temperatures under basic usage (1 Plex stream with transcoding, 1 VM running, ~20 containers running).

For advanced usage (all M.2s running), their temperature goes up to 45°C.
The CPU never exceeds 65°C during benchmarks.

(I wanted to try Unraid after 4-5 years of using TrueNAS Core and Scale, and I am happy with it so far)
[unraid.png: screenshot of the Unraid dashboard showing drive temperatures]

Just to remind you, each M.2 has a heatsink, and I have a fan blowing air with this 3D-printed cooling system.
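If anyone wants to compare numbers, the drive temperatures can also be read straight from the shell; a quick sketch, assuming nvme-cli is installed and the drives enumerate as /dev/nvme0 through /dev/nvme5:

Code:
# print the temperature lines from each NVMe drive's SMART log
for d in /dev/nvme[0-5]; do
    echo "== $d =="
    nvme smart-log "$d" | grep -i temperature
done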
 

ChargeItAll

New Member
Nov 10, 2024
There are a few options with this thing. OCuLink 8i is probably the best, as you should get 8 full lanes of PCIe (~128 Gbps).

OCuLink 4i can be done with a PCIe card OR an M.2 adapter. Both would max out at 4 lanes (~64 Gbps).

And USB4/TB is another option, although it caps out at 40 Gbps.

I'm planning to test USB4 and Oculink 8i once it finally all gets delivered.

Looks like I'll be testing USB4 soon and OcuLink 8i late May. Either way I'll post some pictures and details and possibly tears when it happens.
How is the testing going?
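For anyone new to the lane math in the quoted post, the numbers are just lanes times per-lane bandwidth (roughly 16 Gbps usable per PCIe 4.0 lane); a back-of-the-envelope sketch:

Code:
# rough usable bandwidth per option
echo "OCuLink 8i:       $((8 * 16)) Gbps"   # 8 lanes of PCIe 4.0
echo "OCuLink 4i / M.2: $((4 * 16)) Gbps"   # 4 lanes of PCIe 4.0
echo "USB4/Thunderbolt: 40 Gbps (protocol limit)"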
 

qiang

New Member
Nov 23, 2024
Hey everyone,
I'd like to ask about the QNAP TL-D400S with its PCIe controller card. In the first post this hardware is listed as supported, and there is a video on YT where it appears to be working. I've been trying for three days to get it running, but without any luck. This is my second MS-01; I sent the first one back just because of this issue (I thought it was down to the CPU I had, a 12900), but now I have a 13900 and an identical setup to the YT video, and the card is still not recognized by the OS. I have tried Windows, Ubuntu and Debian; under none of them does lspci show the controller. I tried with disks and without, and I've checked almost every setting in the BIOS. I've also tested the controller in a Dell server, where it was recognized properly.
Does anyone have this NAS (the QNAP 4-port SFF-8088 adapter from the QNAP TL-D400S JBOD kit) and have it working?

Please help.

Regards
Jakub
I managed to connect the TL-D400S to my 13900H MS-01 with a QXP-400eS-A1164. I can pass it through to my TrueNAS VM. Waiting for my HDDs to build a NAS.
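For anyone else chasing a card that never appears, it can be worth checking from Linux whether the slot sees anything at all before swapping hardware; a rough sketch (it won't help if the BIOS never trains the link, but it rules out an enumeration hiccup):

Code:
# look for the SAS/SATA HBA in the PCI device list
lspci -nn | grep -iE 'sata|sas|serial attached'

# force a PCIe bus rescan without rebooting (run as root), then check again
echo 1 > /sys/bus/pci/rescan
lspci -nn | grep -iE 'sata|sas|serial attached'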
 

hillxrem

New Member
Nov 1, 2024
Has anyone successfully connected a DEG1 from a PCIe card?
I have tried installing a PCIe card to add an OCuLink port, and also a PCIe card that adds an M.2 slot plus an adapter to convert the M.2 to an OCuLink port, but neither works.
So far, my eGPU has never shown up in lspci, and with some cards the BIOS did not even start up.
 

myn

New Member
Nov 18, 2024
I actually found the exact same QNAP disk controller from the review for sale on eBay, so I've ordered it. It should stay cool, and being short like that means it won't trap heat underneath it.

I've ordered the i5 MS-01 to help keep the temperature down too; it operates at a slightly lower wattage.

Cable-wise I've ordered one of these: https://www.aliexpress.com/item/1005005830936415.html

And then one of these to house the disks in: https://www.aliexpress.com/item/1005004679669836.html

Hopefully that'll all work together. It's quite confusing trying to piece together the specifications.

I went for the backplane with the 12Gb/s SAS connector because this means the cable can have external terminations at both ends. My hope is that this will form a Faraday cage and protect the signals from interference.
Checking in. Did this end up working for you?
 

benjvfr

New Member
Feb 19, 2024
I managed to connect the TL-D400S to my 13900H MS-01 with a QXP-400eS-A1164. I can pass it through to my TrueNAS VM. Waiting for my HDDs to build a NAS.
Has anyone tried using a QXP-1600eS-A1164 card with 2x TL-D800S? Does passthrough to a TrueNAS Scale VM work correctly and detect all 16 drives?

I ask because I wonder whether the MS-01's lack of PCIe bifurcation support could be a problem for this setup (QXP-1600eS-A1164 + 2x TL-D800S).
 

jhbball2002

New Member
Jan 10, 2024
Any Windows 11 users having difficulty installing Intel Network Driver 29.4? It keeps popping up through Intel's driver and update software, but fails to install every time.
 

ilbarone87

New Member
Sep 4, 2023
Hello,
I bought a 13900H a month ago and put it to work, moving some of my VMs onto it (Proxmox cluster): 3 very light-load VMs, 3 LXCs and a 3-node k8s cluster. In total the average load is 26-27%.
The fan has started to spin constantly. It's not very loud, but that continuous whine can get annoying after a while; on top of that, I've noticed the temps are going very high:

Code:
root@zep-pve-02:~# sensors
nvme-pci-0100
Adapter: PCI adapter
Composite:    +56.9°C  (low  = -40.1°C, high = +119.8°C)
                       (crit = +129.8°C)
Sensor 1:     +75.8°C  (low  = -40.1°C, high = +139.8°C)
Sensor 2:     +56.9°C  (low  = -40.1°C, high = +119.8°C)

acpitz-acpi-0
Adapter: ACPI interface
temp1:        +27.8°C 

coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +86.0°C  (high = +100.0°C, crit = +100.0°C)
Core 0:        +85.0°C  (high = +100.0°C, crit = +100.0°C)
Core 4:        +76.0°C  (high = +100.0°C, crit = +100.0°C)
Core 8:        +81.0°C  (high = +100.0°C, crit = +100.0°C)
Core 12:       +86.0°C  (high = +100.0°C, crit = +100.0°C)
Core 16:       +79.0°C  (high = +100.0°C, crit = +100.0°C)
Core 20:       +79.0°C  (high = +100.0°C, crit = +100.0°C)
Core 24:       +79.0°C  (high = +100.0°C, crit = +100.0°C)
Core 25:       +79.0°C  (high = +100.0°C, crit = +100.0°C)
Core 26:       +79.0°C  (high = +100.0°C, crit = +100.0°C)
Core 27:       +79.0°C  (high = +100.0°C, crit = +100.0°C)
Core 28:       +79.0°C  (high = +100.0°C, crit = +100.0°C)
Core 29:       +79.0°C  (high = +100.0°C, crit = +100.0°C)
Core 30:       +79.0°C  (high = +100.0°C, crit = +100.0°C)
Core 31:       +79.0°C  (high = +100.0°C, crit = +100.0°C)

nvme-pci-5800
Adapter: PCI adapter
Composite:    +43.9°C  (low  = -273.1°C, high = +89.8°C)
                       (crit = +94.8°C)
Sensor 1:     +43.9°C  (low  = -273.1°C, high = +65261.8°C)
Sensor 2:     +57.9°C  (low  = -273.1°C, high = +65261.8°C)

I've read that repasting the thermal paste can improve the temps? Or could it be something else?
Also, with the recent discount I bought a second MS-01, a 12900H this time; should I repaste that one as well before I start using it?

Thanks
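One way to confirm whether the package is actually throttling (rather than the fan curve just being aggressive) before repasting; a small sketch, assuming an Intel CPU on a reasonably recent kernel:

Code:
# non-zero counters here mean the CPU really has been thermal throttling
grep . /sys/devices/system/cpu/cpu*/thermal_throttle/package_throttle_count

# log the package temperature every 5 seconds while the usual VM load runs
while true; do sensors | grep 'Package id 0'; sleep 5; done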
 
Feb 19, 2024
Has anyone successfully connected a DEG1 from a PCIe card?
I have tried installing a PCIe card to add an OCuLink port, and also a PCIe card that adds an M.2 slot plus an adapter to convert the M.2 to an OCuLink port, but neither works.
So far, my eGPU has never shown up in lspci, and with some cards the BIOS did not even start up.
Yes, I got it working, and it is passed through to a Windows VM for gaming and to an Ollama VM. I am assuming you powered off your server first, as it is not hot-pluggable? What cards are you using?

I am using these:
GLOTRENDS PA09-HS M.2 NVMe to PCIe 4.0 X4 Adapter with M.2 Heatsink for 2280/2260/2242/2230 M.2 NVMe SSD

chenyang Oculink SFF-8612 to PCI-E 4.0 NVME M.2 M-Key Host Adapter Support 2230/2242/2260/2280 for U.2 SSD & eGPU


And the adapter below does not work, so stop wasting your time lol:
[photo of the adapter that did not work]
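In case it helps anyone setting up passthrough once the eGPU finally shows up in lspci: you can check whether it lands in its own IOMMU group with the usual listing script (nothing MS-01 specific, just a generic sketch):

Code:
#!/bin/bash
# print every IOMMU group and the devices inside it
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done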
 

myn

New Member
Nov 18, 2024
I'm new to the mini-PC/server space. Currently I'm using an old Dell R710 2U rackmount server hosting VMs via ESXi, with 4 SATA HDDs running in RAID 10.

I bought an MS-01 and am looking at migrating off the R710, so I'm after a good bang-for-the-buck external storage solution that I can put my existing 4 hard drives into and host VHDs on, like I do today on the R710.

Does anyone have any thoughts on a small, quiet and relatively cheap (< $300) approach?

Appreciate it!
 

MrNova

New Member
Dec 10, 2024
I've been bashing my head against the wall trying to figure out some issues I am seeing with LACP bonding the 10g NICs and wondering if anyone has come across anything similar. I have 4 MS01s, two i9-13900H (v1.22 bios, 96gb RAM, two NVME drives each) and two i5-12600H (v1.26 bios, 64gb RAM, two NVME drives each). Three of these are running in a Proxmox ceph cluster, and the 4th i5 lives in a separate three node ceph cluster. These clusters are running in separate networks/locations. The 10gig ports are all connected via DACs to USW Aggregation switches, with LACP bonding and layer 3+4 hashing policies enabled in Proxmox and on the switch. Links are being reported as 20gig. Everything looks great, except I have been unable to get >10gig speeds even when using multiple clients in iperf and when benchmarking ceph despite going through pages and pages of forum and reddit posts trying every config tweak I've come across. Using Proxmox 6.8.12 and Ceph 18.2.4 in both clusters.

What's really driving me crazy here is in the second cluster that has two non MS01 nodes, I AM able to see the expected increased speed. Ceph sequential reads are clocking in at ~15gig and I can get nearly 20gig when running multiple iperf clients. But here is the kicker; I can only see the speedup in iperf if one of the NON MS01 nodes is the server. As soon as I try using the MS01 as the server, I drop back down to 10gig total across multiple clients. The only difference I've spotted so far is that the MS01's are using the i40e NIC driver and the others are using ixgbe.

Other than this issue, these guys have all been rock solid with reasonable temps and vpro working across the board.

EDIT: Noticing this in the logs:

Code:
kernel: i40e 0000:02:00.0: PCI-Express: Speed 8.0GT/s Width x4
kernel: i40e 0000:02:00.0: PCI-Express bandwidth available for this device may be insufficient for optimal performance.
kernel: i40e 0000:02:00.0: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.

Is it possible there aren't enough PCIe lanes for both NICs to run at full capacity in parallel?
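For anyone comparing notes, the relevant bits can be pulled from the shell like this (the PCI address is from my logs above; adjust the bond name to whatever you called it in Proxmox):

Code:
# confirm the negotiated PCIe link for the X710 (8.0 GT/s x4 is ~31.5 Gbps raw,
# which should still cover 2x 10GbE on paper, so the i40e warning may be cosmetic)
lspci -s 02:00.0 -vv | grep -E 'LnkCap|LnkSta'

# confirm the bond really negotiated 802.3ad and is using layer3+4 hashing
grep -E 'Bonding Mode|Transmit Hash Policy|MII Status|Speed' /proc/net/bonding/bond0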
 

ajeffco

New Member
Dec 12, 2024
Give me a little more time and I will try to redesign it to make it scalable, which takes a lot of time :). The case is 2U because it is designed to go big or go home with the fan (120 mm). The one I posted above fits perfectly in a 10" rack. To make it completely scalable, going to a 19" rack with one or two MS-01s side by side, I would need to offer options for different use cases. I will let you guys know when it's done :)
I'd love the 10" STL version as well, if you're ok with releasing it.
 

SwanLab

New Member
Dec 7, 2024
I got the MS-01 with the Black Friday deal with:
- i9-13900H
- 32GB RAM
- 1TB SSD (not sure which brand or speed it comes with)
It's still sitting in an unopened Amazon box next to me, waiting for my 3D-printed 140mm fan cover to be done.

I also have available:
- x4 4TB Samsung 990 Pro (PCIe 4.0 x4)
- x1 2TB Samsung 980 Pro (PCIe 4.0 x4)
- x1 256GB Samsung 980 Pro (PCIe 4.0 x4)
- x1 1TB Samsung 970 Evo (PCIe 3.0 x4)

I plan on running PVE with 1 Debian VM for Portainer/Docker, 1 VM for TrueNAS (if possible), and ~1-2 other VMs to tinker and explore with, as this is my first journey into homelabbing.

My question is: how would you best utilize these M.2 NVMe drives I have available? This whole lane thing is hard for me to completely wrap my brain around.

The little I've learned for sure so far is that I should probably use the 1TB Samsung 970 Evo as the main Proxmox drive, since it is the slowest. Thoughts?

Can I run at least 2x 990 Pros in the other slots at max speed? I have a feeling the answer is no.
What if I take out the WiFi card and put a correctly sized M.2 in its place?
Can I utilize the PCIe 4.0 x16 slot for any type of expansion to help out? Maybe attaching a JBOD?

I have started to look into the PCIe cards used for Synology expansion units, and I'm slowly going through this massive thread... perhaps adding more space that way, but it's my understanding that it will affect the speeds of the other slots...?

TLDR
Basically, what would you guys do with what I have, and what can I do to beef this thing up to its max potential for storage and speed?

Or perhaps I should return the MS-01 and do a completely custom build? The small form factor is not an issue as I was planning on printing a 1U rack -- I'm just looking for low power and quiet fans.
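Once I've got Proxmox on it, I guess one quick check for the lane question is to read what each populated M.2 slot actually negotiated; a small sketch using standard sysfs paths (drive numbering will differ per box):

Code:
# show negotiated PCIe speed and width for every NVMe drive
for n in /sys/class/nvme/nvme*; do
    dev=$(readlink -f "$n/device")
    echo "$(basename "$n"): $(cat "$dev/current_link_speed") x$(cat "$dev/current_link_width")"
done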



Here are some updates about my 6-M.2 MS-01 (5x 4TB + 1x 500GB).

Just to remind you, each M.2 has a heatsink, and I have a fan blowing air with this 3D-printed cooling system.
Please, where can I get this prototype board? I am printing the same cooling system now and would be willing to help beta test. What specific drives are you using?
 