Minisforum MS-01 PCIe Card and RAM Compatibility Thread


killroy1971

New Member
Jun 19, 2021
6
2
3
Okay, so the sled is for U.2 drives (drat). Guess I'm buying some NVMe drives instead and using my SSDs in a NAS box.
 

BlueChris

Active Member
Jul 18, 2021
154
54
28
53
Athens-Greece
After messing with a handful of dual M.2 PCIe cards, I finally got one that fits the MS-01 with bifurcation running on the card itself.

TXB122 PCIe 3.1 x8 ASM2812 from AliExpress

Now have four 4TB storage drives and a 1TB as the boot drive.

View attachment 37167

Most cards with onboard bifurcation are too long, but this one has a perforation in the PCB so you can snap off the end if you don't need to mount 22110 NVMe drives. I also did the mod of covering the PCIe pins with kapton tape, so all 64GB of my RAM shows up just fine.

Ran some benchmarks: the two drives on the PCIe card read twice as fast as the two drives in the native M.2 slots (write speeds are the same).

Now off to test redundant software RAIDs.

View attachment 37168

View attachment 37169
I have the same card, but the chipset on it reaches almost 100°C. At one point the NVMe and the M.2 USB3 card I have on it disappeared from Proxmox, and only a restart brought them back.
That's one of the reasons I took the MS-01 out of its case and installed it in an HTPC case; now two 120mm Noctuas blow air over it and it works fine.

Try to push the disks and see what you get...
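For pushing the disks, something like this minimal fio random-read job is what I'd use (the device name /dev/nvme2n1 and the 60s runtime are placeholders, not from this thread; check `nvme list` for your actual devices first):

```shell
# Minimal fio job to hammer one of the card's drives while watching temps.
# /dev/nvme2n1 is a placeholder -- substitute your own device from `nvme list`.
cat > nvme-stress.fio <<'EOF'
[global]
ioengine=libaio
direct=1
time_based=1
runtime=60

[randread-stress]
filename=/dev/nvme2n1
rw=randread
bs=4k
iodepth=32
numjobs=4
EOF

# Run it only if fio is installed; keep an eye on the card's heatsink meanwhile.
if command -v fio >/dev/null 2>&1; then
  fio nvme-stress.fio
fi
```

Small-block random reads at high queue depth keep the bridge chip busiest, which is what seems to expose the heat problem.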

Has anyone tried a quad NVMe PCIe adapter with onboard bifurcation, like this one? https://www.amazon.com/gp/aw/d/B0CCNL7YD8/ref=dp_ob_neva_mobile

I wonder, since two SSDs go on the back of the card, whether they'd run super hot or even fit at all.
Read my comment above: the heat must be enormous, not only for the underside NVMe drives but also from the card's chipset.
 

anewsome

Active Member
Mar 15, 2024
112
108
43
After messing with a handful of dual M.2 PCIe cards, I finally got one that fits the MS-01 with bifurcation running on the card itself.

TXB122 PCIe 3.1 x8 ASM2812 from AliExpress

Now have four 4TB storage drives and a 1TB as the boot drive.
I have this same card too, although I got mine off Amazon. The system I originally bought it for DOES support bifurcation, but I wanted a card that would work in just about any system. In my testing, it worked well, was plenty fast and I was pleased with it. I also had to snap mine to fit it into a rack server that was tight on space, so it'll only be running 2280 drives from here on out.

Pretty sure it was reported earlier in this thread as known working in the MS-01, but glad to see you got it running too.

I also see you didn't skip the kapton tape on the pins. Judging from some of the issues I see on forums, not everyone got the memo that the pins need to be taped.

I wonder if this will no longer be needed after a BIOS update?
 

LinkAgg_Junkie

New Member
Aug 12, 2020
6
6
3
Has anyone tried a quad NVMe pcie adapter that had onboard bifurcation like this https://www.amazon.com/gp/aw/d/B0CCNL7YD8/ref=dp_ob_neva_mobile

I wonder since two ssd’s go on the back of the card, if they’d be super hot or even fit at all?
The card is too long, and there's barely any room under the PCIe card. When I was testing with a Gigabyte CMT4032 (no rear NVMe slots), even the tiny chips on the back of the card were interfering with components on the MS-01 motherboard.
 

johnknierim

New Member
Aug 1, 2022
26
18
3
After messing with a handful of dual M.2 PCIe cards, I finally got one that fits the MS-01 with bifurcation running on the card itself.

TXB122 PCIe 3.1 x8 ASM2812 from AliExpress

Now have four 4TB storage drives and a 1TB as the boot drive.

View attachment 37167

Most cards with onboard bifurcation are too long, but this one has a perforation in the PCB so you can snap off the end if you don't need to mount 22110 NVMe drives. I also did the mod of covering the PCIe pins with kapton tape, so all 64GB of my RAM shows up just fine.

Ran some benchmarks: the two drives on the PCIe card read twice as fast as the two drives in the native M.2 slots (write speeds are the same).

Now off to test redundant software RAIDs.

View attachment 37168

View attachment 37169
 

Zagor64

New Member
Jan 24, 2024
6
2
3
Looks like btop.
I think you're right. It's funny, because I've used btop before and didn't remember it having per-core temps (I'm guessing I usually run it in VMs, which don't have access to the sensors). Anyway, I installed btop on Proxmox and it does look just like it. Thanks!
 
  • Like
Reactions: Phenic

Quadrapole

New Member
May 14, 2024
3
2
3
I have an LSI 9206-16E PCIe 3.0 x8 HBA in IT mode with the SAS2308, and it doesn't recognize the hard drives in Unraid.

I can see

IOMMU group 20: [1000:0087] 03:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
IOMMU group 21: [1000:0087] 05:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)

in Unraid, but it doesn't see the drives at all.

I think another person in this thread also mentioned his LSI 9206 not working, though his may have been dead.

Does anyone know anything I can try? Would taping the pins help?
 

chris4prez

New Member
May 19, 2024
3
0
1
Anyone having issues putting a 25G NIC in? I'm trying to use an Intel E810 I have sitting around, which is PCIe 4.0 x8, but it isn't negotiating full PCIe bandwidth:

root@pve1:~# dmesg -T | grep "ice 0000"
[Tue Jun 4 20:59:07 2024] ice 0000:01:00.0: The DDP package was successfully loaded: ICE OS Default Package version 1.3.36.0
[Tue Jun 4 20:59:07 2024] ice 0000:01:00.0: 15.753 Gb/s available PCIe bandwidth, limited by 16.0 GT/s PCIe x1 link at 0000:00:01.0 (capable of 126.024 Gb/s with 16.0 GT/s PCIe x8 link)

I'm also running 96GB of Crucial RAM, two Samsung 22110 drives in RAID1, and one 3.84TB Samsung U.2, all of which has been stable for days. I just added the NIC today, as I'm preparing to build out the rest of my cluster and run Ceph across the U.2 drives via the 25G NIC. Wondering if anyone has suggestions, or whether I need to fall back to some ConnectX-4 Lx's.
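A quick check worth running, assuming the card is still visible at 01:00.0 as in the dmesg output: compare the negotiated link (LnkSta) against what the card advertises (LnkCap):

```shell
# Compare the negotiated PCIe link against the card's capability
# (bus address 01:00.0 taken from the dmesg lines above).
lspci -s 01:00.0 -vv 2>/dev/null | grep -E 'LnkCap:|LnkSta:' || true

# Tiny helper to pull speed and width out of an LnkSta/LnkCap line:
parse_link() {
  echo "$1" | sed -n 's/.*Speed \([^,]*\), Width \(x[0-9]*\).*/\1 \2/p'
}

parse_link "LnkSta: Speed 16GT/s, Width x1"   # prints: 16GT/s x1
```

If LnkCap says x8 but LnkSta says x1, the card is fine and the link simply trained narrow, which points at the slot, riser, or seating rather than the NIC itself.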
 

BlueChris

Active Member
Jul 18, 2021
154
54
28
53
Athens-Greece
Anyone having issues putting a 25G NIC in? I'm trying to use an Intel E810 I have sitting around, which is PCIe 4.0 x8, but it isn't negotiating full PCIe bandwidth:

root@pve1:~# dmesg -T | grep "ice 0000"
[Tue Jun 4 20:59:07 2024] ice 0000:01:00.0: The DDP package was successfully loaded: ICE OS Default Package version 1.3.36.0
[Tue Jun 4 20:59:07 2024] ice 0000:01:00.0: 15.753 Gb/s available PCIe bandwidth, limited by 16.0 GT/s PCIe x1 link at 0000:00:01.0 (capable of 126.024 Gb/s with 16.0 GT/s PCIe x8 link)

I'm also running 96GB of Crucial RAM, two Samsung 22110 drives in RAID1, and one 3.84TB Samsung U.2, all of which has been stable for days. I just added the NIC today, as I'm preparing to build out the rest of my cluster and run Ceph across the U.2 drives via the 25G NIC. Wondering if anyone has suggestions, or whether I need to fall back to some ConnectX-4 Lx's.
Which card exactly is it? Is it HPE-branded or anything? Maybe you need to enter the card's BIOS, if it's shown during boot, to change some settings.
Also, maybe the card shows up somewhere in the system BIOS. Have you checked everywhere?
 

chris4prez

New Member
May 19, 2024
3
0
1
Which card exactly is it? Is it HPE-branded or anything? Maybe you need to enter the card's BIOS, if it's shown during boot, to change some settings.
Also, maybe the card shows up somewhere in the system BIOS. Have you checked everywhere?
E810-XXVDA2. I also updated to the latest Intel E810 firmware, which didn't fix anything either. Confirmed all the BIOS settings are in order with ASPM disabled; toggling all the standard passthrough stuff on/off does nothing for it. Nothing odd or fancy set in the card's BIOS/config settings either (when it shows up... more below).

Here's where it really gets interesting, after a few more hours of testing last night:
- On 1 out of 3 boots, the card disappears entirely.
- On 2 out of 12 boots, it was actually picked up at a slightly faster rate, yet still nowhere near what I believe the card and the PCIe slot should support.
[Tue Jun 4 23:32:06 2024] ice 0000:01:00.0: 31.506 Gb/s available PCIe bandwidth, limited by 16.0 GT/s PCIe x2 link at 0000:00:01.0 (capable of 126.024 Gb/s with 16.0 GT/s PCIe x8 link)
vs.
[Tue Jun 4 20:59:07 2024] ice 0000:01:00.0: 15.753 Gb/s available PCIe bandwidth, limited by 16.0 GT/s PCIe x1 link at 0000:00:01.0 (capable of 126.024 Gb/s with 16.0 GT/s PCIe x8 link)
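For what it's worth, those dmesg numbers line up exactly with the negotiated lane count: PCIe 4.0 signals at 16 GT/s per lane with 128b/130b encoding, so usable bandwidth is about 16 * 128/130 Gb/s per lane (the kernel appears to truncate the per-lane figure before multiplying, hence its 126.024 rather than 126.031 for x8):

```shell
# Usable PCIe 4.0 bandwidth: 16 GT/s per lane * 128/130 encoding efficiency.
pcie_gbps() {
  awk -v lanes="$1" 'BEGIN { printf "%.3f\n", 16 * 128 / 130 * lanes }'
}

pcie_gbps 1   # 15.754 -- the x1 link most boots negotiated
pcie_gbps 2   # 31.508 -- the x2 link seen on 2 of 12 boots
pcie_gbps 8   # 126.031 -- what the slot and card should manage
```

In other words, the figures are purely a function of how many lanes the link trained at; the card itself is reporting the right capability.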

Additional details showing the default config:
root@pve1:~# ethtool --show-priv-flags enp1s0f0np0
Private flags for enp1s0f0np0:
link-down-on-close : off
fw-lldp-agent : off
vf-true-promisc-support: off
mdd-auto-reset-vf : off
vf-vlan-pruning : off
legacy-rx : off

root@pve1:~# lshw -class network
*-network:0
description: Ethernet interface
product: Ethernet Controller E810-XXV for SFP
vendor: Intel Corporation
physical id: 0
bus info: pci@0000:01:00.0
logical name: enp1s0f0np0
logical name: /dev/fb0
version: 02
serial: REMOVED_MAC_ADDR
capacity: 25Gbit/s
width: 64 bits
clock: 33MHz
capabilities: pm msi msix pciexpress vpd bus_master cap_list rom ethernet physical 1000bt-fd 25000bt-fd autonegotiation fb
configuration: autonegotiation=off broadcast=yes depth=32 driver=ice driverversion=6.8.4-3-pve firmware=4.50 0x8001d8ba 1.3597.0 latency=0 link=no mode=1024x768 multicast=yes visual=truecolor xres=1024 yres=768
 

LinkAgg_Junkie

New Member
Aug 12, 2020
6
6
3
I have the same card, but the chipset on it reaches almost 100°C.
Good to know. I've run stress-test benchmarks on the card's NVMe drives for long stretches but haven't seen any issues. Maybe the fact that these 990 Pros are single-sided makes a difference? The NVMe drives themselves never got over 45°C. How are you seeing the temp of the card's chipset?
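If you want to poke at it from software first, a sketch like this shows whatever sensors the kernel exposes (whether the ASM2812 bridge reports a temperature at all is board-dependent; if it doesn't, an IR thermometer is probably the only way):

```shell
# Look for any temperature readings the kernel exposes; the ASM2812 bridge
# chip may well not show up, in which case only an IR thermometer will tell you.
sensors 2>/dev/null | grep -iE 'temp' || echo "no lm-sensors readings found"

# NVMe composite temps are also available via sysfs (in millidegrees C):
mc_to_c() { echo $(( $1 / 1000 )); }
for t in /sys/class/hwmon/hwmon*/temp1_input; do
  if [ -e "$t" ]; then
    echo "$t: $(mc_to_c "$(cat "$t")") C"
  fi
done
```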
 

Quadrapole

New Member
May 14, 2024
3
2
3
I have an LSI 9206-16E PCIe 3.0 x8 HBA in IT mode with the SAS2308, and it doesn't recognize the hard drives in Unraid.

I can see

IOMMU group 20:[1000:0087] 03:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
IOMMU group 21:[1000:0087] 05:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)

in Unraid, but it doesn't see the drives at all.

I think another person in this thread also mentioned his LSI 9206 not working, though his may have been dead.

Does anyone know anything I can try? Would taping the pins help?
Just wanted to note that the LSI 9206-16e actually DOES work.

The LSI card I bought on eBay was misconfigured, and I followed this YT video

After that, all the drives showed up.
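For anyone hitting the same thing, a first sanity check is that the HBA is actually running IT-mode firmware. sas2flash is LSI/Broadcom's flashing utility (downloaded separately from Broadcom); a sketch, with a hypothetical output line for illustration:

```shell
# List all LSI controllers and their firmware; IT-mode firmware is reported
# with "(IT)" in the product ID. sas2flash must be downloaded from Broadcom.
sas2flash -listall 2>/dev/null || true

# Helper: does a reported product-ID string indicate IT-mode firmware?
is_it_mode() {
  echo "$1" | grep -q '(IT)'
}

# Hypothetical example line, not actual output from this card:
if is_it_mode "Firmware Product ID : 0x2214 (IT)"; then
  echo "IT-mode firmware"
fi
```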
 

Dr3am

New Member
May 31, 2024
8
1
3
Noob here with the MS-01, but I've been following the thread. I currently have the 12600H variant with 64GB of RAM and a 2TB Gen4 x4 U.2 drive.

I'm looking for advice on which HBA card to get.

I'm looking to run 6-8 SATA or SAS SSDs in an external drive cage.

Does anyone have a suggestion for which generation of LSI HBA (or other brand) I'd need to run these drives with TRIM support?

I have experience with spinning rust on older LSI cards like the 9211, but no experience with SSD support. It will be going in the x16 slot on my MS-01. I can 3D print a fan shroud if the card runs too hot.
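Whichever HBA you end up with, you can verify that TRIM/UNMAP actually passes through once the drives are attached; a sketch using lsblk (the "2G" value below is just an illustrative example, not output from any card in this thread):

```shell
# Non-zero DISC-GRAN / DISC-MAX columns mean the kernel can issue discards
# (TRIM/UNMAP) through that path; "0B" means no TRIM support on that device.
lsblk --discard 2>/dev/null || true

# Helper: judge a DISC-MAX field value such as "2G" or "0B".
supports_trim() {
  [ -n "$1" ] && [ "$1" != "0B" ]
}

if supports_trim "2G"; then
  echo "discard supported"
fi
```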
 
  • Like
Reactions: Tlex

Tlex

New Member
May 12, 2024
14
3
3
Noob here with the MS-01, but I've been following the thread. I currently have the 12600H variant with 64GB of RAM and a 2TB Gen4 x4 U.2 drive.

I'm looking for advice on which HBA card to get.

I'm looking to run 6-8 SATA or SAS SSDs in an external drive cage.

Does anyone have a suggestion for which generation of LSI HBA (or other brand) I'd need to run these drives with TRIM support?

I have experience with spinning rust on older LSI cards like the 9211, but no experience with SSD support. It will be going in the x16 slot on my MS-01. I can 3D print a fan shroud if the card runs too hot.
following :)

By the way, what kind of external enclosure were you looking at? I'm totally new to external drive cages, but I'm looking for a 2-3U short-depth rackmount enclosure.
 
  • Like
Reactions: Dr3am

Dr3am

New Member
May 31, 2024
8
1
3
I have the same card, but the chipset on it reaches almost 100°C. At one point the NVMe and the M.2 USB3 card I have on it disappeared from Proxmox, and only a restart brought them back.
That's one of the reasons I took the MS-01 out of its case and installed it in an HTPC case; now two 120mm Noctuas blow air over it and it works fine.

Try to push the disks and see what you get...


Read my comment above: the heat must be enormous, not only for the underside NVMe drives but also from the card's chipset.
Does the motherboard use standard ITX hole spacing?