Minisforum MS-01 + QNAP JBOD issues


just_a_person

New Member
Apr 18, 2024
16
7
3
One note worth mentioning: I was using 16TB/18TB drives for my badblocks tests. I found that badblocks could pass all 4 passes with no errors on a smaller 1TB drive, and that errors would only crop up on larger drives, since the errors are very infrequent.
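For anyone reproducing this, a destructive four-pass run like the one described above looks roughly like the sketch below. The device name /dev/sdX is a placeholder, and badblocks -w erases everything on the target, so the command is only printed here rather than executed:

```shell
# Placeholder target -- substitute your real device. badblocks -w DESTROYS data.
DEV=/dev/sdX

# -w: destructive write/read test (4 patterns = the "4 passes" above)
# -s: show progress   -v: verbose, report error counts
# -b 4096: larger block size; older badblocks builds overflow their block
#          counter at the default 1 KiB block size on multi-TB drives
CMD="badblocks -wsv -b 4096 $DEV"

# Printed rather than run, since the real command wipes the drive:
echo "$CMD"
```

On very large drives (18 TB and up) you may need an even larger -b value on older e2fsprogs builds.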
 

orinivan

New Member
Jun 4, 2024
9
2
3
One note worth mentioning: I was using 16TB/18TB drives for my badblocks tests. I found that badblocks could pass all 4 passes with no errors on a smaller 1TB drive, and that errors would only crop up on larger drives, since the errors are very infrequent.
I'll keep that in mind when I test my 14s and 2s. Thanks.
 

just_a_person

New Member
Apr 18, 2024
16
7
3
I’m having a somewhat similar issue with a tl-d400s enclosure. The qxp-400es-a1164 card is not recognized at all by the minisforum so I got a cheap LSI card on eBay. This card recognizes my drives (4x16tb seagates) but there are frequent problems reported in dmesg and Unraid just fails drives seemingly at random. SMART tests are all passing (extended and short) for all drives.
I am a bit at a loss as to what to try next. Different HBA? Return the MS-01?

The QXP card works just fine on my gaming PC, so that points to the Minisforum being the culprit. Tried both BIOS versions.

Help?
From what I saw in the PCIe compatibility thread, what you're experiencing might be a separate issue altogether with the SMBus, which I think is solved by putting some tape over the SMBus pins.

Here's also a video on the topic:

That said, no clue whether you would just be trading that error for the one I was experiencing instead, which would lead to long-term headaches.
 

dvdplm

New Member
Jan 11, 2024
8
4
3
Here's also a video on the topic:

That said, no clue whether you would just be trading that error for the one I was experiencing instead, which would lead to long-term headaches.
I did the taping, although Minisforum support told me to tape both sides, which is different from what hardwarehaven is doing, and still no dice. I also never saw any of the commonly reported issues (not POSTing, RAM instability, erroneous enumeration, etc.).
 

orinivan

New Member
Jun 4, 2024
9
2
3
I'll post the final results of the burn-ins of my 6 TB and 8 TB drives in a day or so, when they finish, but I can see that I'm definitely having verify errors - though only on certain drives, not on all of them as OP did.

Is this because the MS-01 doesn't use ECC RAM, or something else?
 

orinivan

New Member
Jun 4, 2024
9
2
3
After doing a full badblocks run (using bht) against my 6 TB and 8 TB drives, I see the same thing as OP: verify errors (though far fewer than they had), with no errors reported by the subsequent smartctl tests (short, conveyance, and long for the 6s; short and long for the 8s).

Of the eight 6s, one drive had 1 verify error, that occurred during the 1st of the four read passes.
Of the eight 8s, one drive had 2 verify errors, and two drives had one verify error each. All were found during the 4th of the four read passes.
 

just_a_person

New Member
Apr 18, 2024
16
7
3
I don't think it's down to ECC RAM alone, since bit-flip errors on new RAM should be very rare (I also wasn't experiencing the same errors on an AMD machine without ECC). There could, however, be issues with the motherboard/firmware itself (or it's somehow related to the MS-01 using a laptop CPU).
 

ShengLong

New Member
Apr 8, 2024
1
0
1
I'm very concerned by this thread, as I bought two MS-01s and two TL-D800Ses, both running TrueNAS Scale, and have purchased two more MS-01s and two TL-D1600Ses, to also run TrueNAS Scale. I have the same memory and CPU option as OP. I'm using the HBA that came with the QNAP JBOD, in both cases.

I'm running bht now (one unit has 8 x 8 TB drives, the other, 8 x 6 TB) ...

Edit: bht has done two complete sets of write-then-read passes on my 6 TB drives. It reported one error, on one drive, at ~81% of the way through the first read pass, and nothing on the second. It's about 25% through the second read pass on my 8 TB drives, and no errors have been found on any disks yet. It seems I am having the issue, just not to the extent that OP did.
What BIOS settings are you using? I installed the SFF-8088 card and it disables my 2.5 Gb Ethernet ports... as soon as I take the card out, they work again.
 

orinivan

New Member
Jun 4, 2024
9
2
3
What BIOS settings are you using? I installed the SFF-8088 card and it disables my 2.5 Gb Ethernet ports... as soon as I take the card out, they work again.
Default, as it comes out of the box. The only change I made was to disable Secure Boot.
 

orinivan

New Member
Jun 4, 2024
9
2
3
I’m having a somewhat similar issue with a tl-d400s enclosure. The qxp-400es-a1164 card is not recognized at all by the minisforum so I got a cheap LSI card on eBay. This card recognizes my drives (4x16tb seagates) but there are frequent problems reported in dmesg and Unraid just fails drives seemingly at random. SMART tests are all passing (extended and short) for all drives.
I am a bit at a loss as to what to try next. Different HBA? Return the MS-01?

The QXP card works just fine on my gaming PC, so that points to the Minisforum being the culprit. Tried both BIOS versions.

Help?
Did you ever get it to work? I have a QXP-400es-A1164, two QXP-800es-A1164, and two QXP-1600es-A1164 cards all in MS-01s, recognized and running under TrueNAS Scale (based on Debian 12 Bookworm, which I also tested the cards under, before installing TNS). So, clearly, they work.
 

ytzelf

New Member
Mar 5, 2024
5
0
1
Hi guys, any update on this? Curious, as I seem to be in the same situation (QXP-400es-A1164 not recognized by the OS).
 

orinivan

New Member
Jun 4, 2024
9
2
3
I don't know why they aren't working for either of you. All five of my QXP-[4,8,16]00es-A1164 cards are recognized.
 

argumentum

New Member
Jul 7, 2024
2
0
1
I'm very concerned by this thread, as I bought two MS-01s and two TL-D800Ses, both running TrueNAS Scale, and have purchased two more MS-01s and two TL-D1600Ses, to also run TrueNAS Scale. I have the same memory and CPU option as OP. I'm using the HBA that came with the QNAP JBOD, in both cases.
... ...
Same setup here. Changed SCALE to CORE. No more problems. Much more stable in recovering pools**.
**I removed HDDs and pushed 'em back in while running live, and SCALE could not restore the pools, while CORE did.

All the "goodies" I wanted from SCALE I can get from Proxmox. For the NAS feature, CORE was the way to go for me.
 

martinm2

New Member
Nov 7, 2021
9
3
3
What BIOS settings are you using? I installed the SFF-8088 card and it disables my 2.5 Gb Ethernet ports... as soon as I take the card out, they work again.
I had something very similar to this.

Took me a while to work out the cause - it turns out that the "stable" ethernet adapter naming scheme provided by systemd is anything but. Adding a PCIe card can sometimes cause devices to be enumerated differently, which changes the names of ethernet adapters. There's then no matching entry in /etc/network/interfaces, so the ports don't get brought up.

To fix this, I identified the MAC address for each port by running "ip a" and then created a link file for each in /etc/systemd/network like:

Code:
/etc/systemd/network/10-persistent-net-myrj1.link

[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=myrj1

Reboot and "ip address" now shows a "myrj1" name for my first RJ45 port with a MAC address of aa:bb:cc:dd:ee:ff, and it will stay like that regardless of PCIe topology. You can allocate names as you like, but you're supposed to stay clear of standard names for ports (eth0 etc.) which is why I chose the "my" prefix. Then go and fix up /etc/network/interfaces and ifup each one.
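For anyone following along, the matching /etc/network/interfaces entry then just references the new name (the addresses below are made up; adjust for your network, then run "ifup myrj1"):

```
auto myrj1
iface myrj1 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
```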

I'm going to do this for every port on all of my servers from now on, not just for stability but because names like myrj1 and mysfp2 are much easier to work with than the likes of enp10s0e1np1.

More details here - NetworkInterfaceNames - Debian Wiki
 
Last edited:
  • Like
Reactions: Whatever

waicool20

New Member
Jul 16, 2024
2
0
1
I have the same issues trying to pass through my QXP-800es-A1164 to an OpenMediaVault VM on Proxmox. I've given up on that and tried disk passthrough instead, which seems a lot more stable.
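For anyone wanting to try the same approach: Proxmox disk passthrough hands an individual drive to the VM by its stable by-id path instead of passing through the whole controller. The VM id (101) and disk id below are placeholders, so the command is only printed here:

```shell
# Placeholder VM id and disk id -- list your real disk ids with:
#   ls -l /dev/disk/by-id/
VMID=101
DISK=/dev/disk/by-id/ata-EXAMPLE_SERIAL

# Attach the whole disk to the VM as a SCSI device:
CMD="qm set $VMID -scsi1 $DISK"

# Printed rather than executed, since qm only exists on a Proxmox host:
echo "$CMD"
```

Note this passes block devices, not the HBA itself, so SMART data and controller quirks stay on the host side.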
 

MetalPhreak

New Member
Jan 19, 2024
27
19
3
I had lots of issues under load with the A1164 drivers on a few OSes. FreeBSD-based TrueNAS hated it. Anything with a newish Linux kernel seems fine (including the Linux-based TrueNAS - I can't remember their naming scheme). The older versions of the QNAP card came with a different chipset which has very poor driver support - such that QNAP moved to ASMedia. Might be worth testing with an LSI/Avago/Broadcom-based HBA.

I run mine with all 4x 1164 chips passed into a VM on proxmox.
 
Last edited:

waicool20

New Member
Jul 16, 2024
2
0
1
I had lots of issues under load with the A1164 drivers on a few OSes. FreeBSD-based TrueNAS hated it. Anything with a newish Linux kernel seems fine (including the Linux-based TrueNAS - I can't remember their naming scheme). The older versions of the QNAP card came with a different chipset which has very poor driver support - such that QNAP moved to ASMedia. Might be worth testing with an LSI/Avago/Broadcom-based HBA.

I run mine with all 4x 1164 chips passed into a VM on proxmox.
I have tested with an LSI 9207-8e on Proxmox, and it does not work: any drive bays left empty show up as a 100M "ASM Config" device and cause timeout issues that you can see in dmesg. Technically you can still mount any drives that are inserted in the unit, but IO is extremely slow (down to 40 Mb/s), and boot becomes extremely slow even if the drive you're booting from is not part of the unit.
 

voodek

New Member
May 7, 2024
4
0
1
I've been able to use an LSI 9200e successfully on Proxmox with passthrough to an internal TrueNAS SCALE VM. However, for it to work, all the drives have to be inserted. If at least one is missing, the ASM Config device shows up instead and causes the entire HBA to run slow and eventually disconnect the drives. I've also tested multiple firmware versions of the LSI HBA; this was working fine on some older versions, but with the newest one available I'm getting disconnections no matter what.
 

Ajdthomson

New Member
Dec 28, 2012
1
0
1
Hello, did anybody get the issues reported in this thread resolved, with their MS-01 and QNAP JBOD using the bundled PCIe card? I was ready to purchase this for my homelab as a really neat solution, but then I found this thread... I was hoping things had improved via firmware, BIOS updates, etc.?
 

unexplodedscotsman

New Member
May 6, 2024
1
0
1
Was wondering the very same thing. If all the problems are ironed out, an MS-01 + QNAP JBOD looks like a great solution for an Unraid build.