Minisforum MS-A2


Raice

Member
Jul 12, 2017
Guangzhou, China
Hello, fellow A2 users. I got my own A2 to use as my main workstation: a 9955HX with Kingston memory at 5600 MT/s CL40 (reflashed SPD).

The noise is awful. I disassembled it, repasted, set TJmax to 78, and applied a -1000 PBO offset - still noisy. It may be because I have a rather hot U.2 SSD (around 60°C), so I ordered a new, larger case for better cooling.

I made a small modification to increase the space for the rails; I can share the STL if anybody needs it. The case should arrive in a couple of days along with a fan and a PWM controller.
A2_case.png

I also bought a heatsink from Minisforum and sent it to a company that will try to make a custom waterblock.
 

misku

New Member
Sep 19, 2017
Raice said: "Hello, fellow A2 users. Got my own A2 to use as main workstation... I can share STL if anybody needs it."
Thanks @Raice! I can also confirm the fan noise is dreadful. It would be great if you could share the STLs and let us know how the build worked out after your gear arrives.
 

Markn12

New Member
Sep 20, 2025
If you limit the temperature to around 72°C, you will get 95% of the performance with less noise. You can also adjust the fan power limits to achieve the same thing while keeping temps in the 70s and noise low. I wish you could use Ryzen Master on these CPUs; it would be much easier to simply lower the voltage, which would drastically reduce both temps and fan noise.
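On Linux, the open-source RyzenAdj tool can apply a similar thermal cap from userspace on many Ryzen mobile/HX chips. Whether it fully supports the 9955HX is an assumption (check the project's compatibility list), and the 72°C value is just the figure from the post above, not a tuned recommendation. A minimal sketch:

```shell
# Sketch: cap the CPU thermal target with RyzenAdj
# (https://github.com/FlyGoat/RyzenAdj). 9955HX support is assumed,
# not confirmed -- verify against the project's CPU list first.
TCTL=72                         # temperature ceiling in °C, per the post above

sudo ryzenadj --tctl-temp="$TCTL"   # apply the thermal limit
sudo ryzenadj --info                # read limits back to confirm they applied
```

The same effect can usually be achieved in the MS-A2 BIOS (as Raice did by lowering TJmax), which survives reboots, whereas RyzenAdj settings need to be reapplied on each boot.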
 

johnknierim

New Member
Aug 1, 2022
North Korea seems to be a very nice country, according to Kim. Much like Minisforum reliability, according to the number of issues reported by users.
It is no exaggeration to say I have purchased over 20 Minisforum systems. I have built two Proxmox clusters with 3× MS-01, and I now have three MS-A2s, a UM780, a UM890, an AtomMan G7 Pt, and several more. I have only ever had a problem with one of them; I returned it to Amazon and ordered another, which works just fine. I only buy them from Amazon, to have at least a 30-day return window.

I know my anecdotal experience does not represent the entirety of all purchased Minisforum systems. One thing to keep in mind is that these are reasonably complex systems on which many of us are trying to do clustering, firewalls, etc. These are not normal end-user practices, and I realize there is a range of experience and aptitude in the technology-hobbyist world.
 

misku

New Member
Sep 19, 2017
johnknierim said: "No exaggeration when I say I have purchased over 20 minisforum systems... I only buy them from Amazon, to have at least 30 day window to return."
I can also confirm I got an MS-A2 from Amazon. It detected my 128GB of RAM without issue (24h memtest with no errors) as well as the SSDs (newest BIOS). It was speedy and overall a really nice system. I just couldn't stand the high-pitched noise coming from the system fan, even at lower speeds; the CPU fan had no such problem. I returned it within the 14-day window (EU) and got a full refund.

What's funny is that the noise was really hard to capture on video, and some people couldn't even hear it in real life. Maybe it's a frequency that only a certain percentage of the population can hear ¯\_(ツ)_/¯ Even though the high-pitched noise is muffled (clipped? filtered?) in the recording, I put it on YouTube so you can judge for yourselves:
 

Brian Stretch

New Member
Jan 26, 2017
I'm very tempted to order an MS-S1 MAX for the more robust cooling, built-in power supply, and all that. As neat as the MS-A2 form factor is, the S1 looks a lot more practical. Spendy, but neat. If I do, it may wind up as my daily-driver Windows box rather than a VM host. I'm kinda disappointed in the S1's Realtek NIC, but dropping in a surplus X710-DA2 would fix that if it's a problem. Has anyone bought both the A2 and the S1 and can compare?
 

wadup

Active Member
Feb 13, 2024
Brian Stretch said: "I'm very tempted to order a MS-S1 MAX for the more robust cooling, built-in power supply and all that... Anyone buy both the A2 and S1 and can compare?"
Lack of GPU support is the negative for me.
 

Brian Stretch

New Member
Jan 26, 2017
Lack of GPU support is the negative for me.
Acceptable tradeoff for me, but good point. I'll stick with full-size machines when I want a high-power GPU. The S1's graphics are a big step down from my Radeon 7900 but adequate, and it would free up a lot of desk space. It would probably be quieter than my current desktop, too.
 

freegate

New Member
Apr 21, 2025
Guadeloupe
The upcoming Minisforum MS-02 Ultra might meet your needs. It will support a dual-slot GPU; it seems a low-profile RTX 4070 will be able to be installed.
 

nintra

New Member
Nov 16, 2025
Hello, I just bought an MS-A2 with a Ryzen 7 8700G. I'm wondering which BIOS version I should use for updating; I'm afraid I might brick the system with the wrong one.

CPU 7000 Series: Bios/F1WSA_DRG_1.02_250616A
CPU 9000 Series: Bios/F1WSA_FRG_1.02_250616a
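For anyone unsure which file applies, it's worth confirming what CPU the board actually reports before flashing anything. On a Linux live USB, for example (a minimal sketch; on Windows, Task Manager or CPU-Z shows the same information):

```shell
# Print the CPU model string the system reports; pick the BIOS image
# that matches the CPU series (7000 vs 9000) shown here.
lscpu | grep 'Model name'
```

As it turns out later in the thread, an 8700G is not an MS-A2 CPU at all, so a check like this would have flagged the mismatch before flashing either file.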
 

VivienM

Member
Jul 7, 2024
Toronto, ON
nintra said: "I just bought a MS-A2 with a Ryzen 7 8700g. I wonder which BIOS version I should use for updating..."
Are you sure you have the MS-A2? I don't think they offer any Ryzen 8xxx...
 

VivienM

Member
Jul 7, 2024
Toronto, ON
Has anybody had issues with spontaneous reboots on recent Proxmox kernels with these?

Mine was running happily on 6.14.8-2-pve (or at least I'm guessing that's the kernel it was running) for ~80 days. Then I had to shut everything down because the power company was doing maintenance, and I booted it back up into kernel 6.17 (which I had installed earlier but never rebooted into); now it's spontaneously rebooting every 8-10 hours, with not much in journalctl that I can tell. I tried the newest 6.14 kernel I had on there - same issue. I've now gone back to 6.14.8-2-pve; we'll see if that helps. Spontaneous reboots are a huge, huge problem for a virtualization host...
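If rolling back does help, Proxmox can pin the known-good kernel so later package updates don't silently boot into a newer one again. `proxmox-boot-tool` has supported kernel pinning since PVE 7.2; the version string below is the one from this thread:

```shell
# List the kernels installed on this host
proxmox-boot-tool kernel list

# Pin the kernel that was stable so it stays the default at boot
proxmox-boot-tool kernel pin 6.14.8-2-pve

# Undo the pin later, once a newer kernel proves stable:
#   proxmox-boot-tool kernel unpin
```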
 

nintra

New Member
Nov 16, 2025
Are you sure you have the MS-A2? I don't think they offer any Ryzen 8xxx...
Yeah, you were right; it's an MS-A1. The reseller I bought it from is not offering any information or help. :/ Sorry for the disturbance.
 

flipper203

Member
Feb 6, 2024
Hi everyone,
sorry for the long post. I used AI to help me write a better English post (I'm French) and to include as much information as possible.

I'm running into repeated storage-pool corruption issues on my Minisforum MS-A2 homelab under Proxmox, and I'd appreciate insights or hardware-compatibility experiences. Here's a summary:

Hardware

  • Host: Minisforum MS-A2
  • RAM: 32GB DDR5 SODIMM (JM4800ASE-32G, 4800MT/s, dual rank, non-ECC)
  • Storage: 2× NVMe SSDs, healthy SMART status, used in RAID1 mirror
  • Proxmox Version: pve-manager/9.0.15/6ef4690b0bee651d
  • Kernel Version: 6.14.11-4-pve #1 SMP PREEMPT_DYNAMIC PMX 6.14.11-4 (2025-10-10T08:04Z) x86_64 GNU/Linux
Configuration & Workloads

  • System Disk: NVMe RAID1 mirror, tested with both LVM Thin+Ext4 and ZFS (same issues seen with both)
  • Backup Schedule: Nightly PBS backup, usually starts at 1am
  • File Storage: TrueNAS SCALE VM, used as NFS server for others (datastore and PBS targets are NFS shares on this VM)
  • Disk Usage: NVMe drives are far from full
  • RAM: Never reached 100% utilization, swap almost never touched
Problem Details

  • During PBS backup at 1am, swap failures occur (swap_info_get: Bad swap offset entry) and sometimes result in kernel panic.
  • Immediately after, the main LVM Thin pool metadata becomes irreparably corrupted (device not exposed, repair ineffective) or ZFS mirror pool degrades with persistent errors.
  • System only boots if data pool entries are commented out of /etc/fstab; root and swap LVs remain healthy.
  • NVMe SSDs show no errors in SMART.
  • Basic memory tests (memtester) and ECC logs show nothing abnormal.
Background

  • PBS runs as a VM on the same host; its backup target is an NFS share from the TrueNAS VM.
  • The corruption occurs under both ZFS and LVM Thin mirror configurations, always during backup or heavy I/O.
  • No signs of memory, swap, or disk exhaustion before failure.
  • Previous ZFS setup failed similarly (metadata/corruption issues during backup).
Questions

  • Is there any known hardware/platform incompatibility (MS-A2 + DDR5 SODIMM + NVMe RAID1) causing swap/storage pool corruption during intense backup jobs?
  • Has anyone seen similar swap/crash problems or thin pool/ZFS degradation on this hardware?
  • Ideas for deeper hardware diagnostics or kernel/proxmox tunables to test?
  • Any tips for reliably running ZFS or LVM Thin in this context—especially with NFS-intensive workloads on TrueNAS SCALE as a VM?

Any advice would be helpful, especially regarding RAM compatibility, NVMe model/brand, BIOS/firmware, and Proxmox or TrueNAS configuration.



Thank you in advance for your suggestions!
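A few low-level checks can help separate a kernel/platform problem from a disk problem. These are standard tools on a stock Proxmox/Debian install; the device name is a placeholder for your actual drive:

```shell
# Kernel log from the previous boot -- panics often only show up here,
# not in the logs of the boot that follows the crash
journalctl -k -b -1 | tail -n 100

# Full NVMe SMART detail (error log, media errors), not just the
# pass/fail health flag
smartctl -a /dev/nvme0

# Thin-pool data AND metadata usage -- metadata exhaustion corrupts
# thin pools even when the data side is "far from full"
lvs -a -o +data_percent,metadata_percent

# Since the failures start with swap_info_get errors, taking swap out
# of the picture for one backup window is a cheap diagnostic
swapoff -a
```

If the corruption stops with swap disabled, that points at the swap path (and possibly the kernel version, per the reports above) rather than the NVMe drives themselves.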
 

VivienM

Member
Jul 7, 2024
Toronto, ON
flipper203 said: "I'm running into repeat storage pool corruption issues on my Minisforum MS-A2 homelab under Proxmox..." (full post quoted above)
Have you tried 6.14.8-2-pve instead? 6.14.11-4-pve and the 6.17 kernels seemed to cause kernel panics or something leading to spontaneous reboots for me on the MS-A2...
 

flipper203

Member
Feb 6, 2024
Thanks for the feedback!
I haven’t tried 6.14.8-2-pve yet—but I am currently running 6.14.11-4-pve with pve-manager/9.0.15 on my MS-A2. I did encounter a spontaneous reboot and kernel panic during a heavy backup window with Proxmox Backup Server (VM TrueNAS + NFS sharing), which led to a corrupted LVM thin pool and required pool recreation.


Prior to that, I also had similar unrecoverable pool/metadata errors under both ZFS and LVM thin pool (on the same hardware/NVMe mirror), always during or right after intensive disk IO—especially backups via PBS.
My RAM (Transcend DDR5 SO-DIMM, non-ECC, JM4800ASE-32G) passed memtester diagnostics, disks are healthy, and BIOS is updated.


I'd be interested to know whether 6.14.8-2-pve has proven more stable for you, and whether the older kernel avoided those panics on the MS-A2 under similar workloads. Are there any other Proxmox kernel versions or tweaks you've tested that improved stability for NFS-intensive operations or backup runs?
 

VivienM

Member
Jul 7, 2024
Toronto, ON
flipper203 said: "I haven't tried 6.14.8-2-pve yet - but I am currently running 6.14.11-4-pve with pve-manager/9.0.15 on my MS-A2..."
So I'm not doing anything intensive, just running a couple of VMs in a home-lab environment. And yet on 6.14.11-4 or the newest 6.17, it would spontaneously reboot after 8-12 hours. Your workload is much more intense...

6.14.8-2 works fine. The machine was up for 80 days before I had to turn it off for electrical maintenance, and since switching back to 6.14.8-2 after the disaster with 6.17 and 6.14.11-4, it's been up for 4 days so far.
 

flipper203

Member
Feb 6, 2024
OK, interesting! I'll run a memtest tonight and then restore (I lost my data pool...). Thanks for the tip. (The RAM and NVMe drives are new, so they shouldn't normally be the issue.)
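One caveat on the overnight test: a userspace tool like memtester can only lock and test a slice of RAM while the OS is running, so a clean pass is weaker evidence than a boot-time test. Memtest86+ ships as a Debian package and registers itself in the GRUB menu (a sketch; assumes GRUB manages the bootloader on this host):

```shell
# Install the boot-time memory tester; the package ships a GRUB hook
apt install memtest86+

# Regenerate the GRUB config so the Memtest86+ entry appears
update-grub

# Reboot, select the Memtest86+ entry, and let it run at least a
# couple of full passes -- single-pass clean results can miss
# marginal DDR5 errors
```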