Minisforum N5/N5-Pro NAS Technical Discussion


marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
Figured I'd start a technical discussion about this mini NAS unit, so we don't keep cluttering up the great deals thread.

Support link (so I don't have to look for it later)

Pro specs
Non-Pro specs


Prior benchmark links



Device list (Windows)
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
A few items noticed so far. This is on the Pro model.

  • Draws around 40-50 watts
    • Installed: Google Coral USB, external DEG1 dock with a GT 720 video card (for testing), 96GB ECC RAM, 3 M.2 drives and 1 HDD, internal USB 3 card.
  • Unit seems well constructed.
  • OSes it can run:
    • xcp-ng 8.3
    • Windows 11
    • Proxmox 8/9
    • I skipped testing the cloud OS
  • One M.2 slot is PCIe 4.0 x2, the other two are PCIe 4.0 x1. A 990 Pro hits a little below 4,000 MB/s in the x2 slot and about 1,600 MB/s in the x1 slots in SSD speed tests.
  • I can pass through the NVIDIA card in the DEG1 dock to a VM.
  • So far I have not been able to pass through the iGPU to a VM (some notes for Proxmox exist on the net, but I haven't tested them).
  • Under Windows 11 you can run the AMD AI tools.
  • You can adjust the iGPU's video memory up to 48 GB in the BIOS (at least on the Pro with 96GB ECC RAM).
  • Trying to see if I can get the AMD management console to work, since the CPU seems like it should support it.

Items I'd like help with
  • iGPU passthrough; xcp-ng preferred, but I'll also run Proxmox if needed (nothing super janky). See the IOMMU group sketch at the end of this post.
  • Seeing if the AMD management console can be used to control the box remotely.
    • My understanding so far is that the CPU can use AMD DASH (AMD CPU specs), but it may need BIOS support.
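To make the passthrough ask concrete: a minimal Python sketch, assuming a Linux host (xcp-ng dom0 or Proxmox) with the IOMMU enabled, that lists every IOMMU group and its member devices. If the iGPU's display, audio, and USB functions all land in the same group, they have to be handed to the guest (or to vfio) together, which is the crux of the problem.

Code:
#!/usr/bin/env python3
# Minimal sketch: list every IOMMU group and its member PCI devices.
# Assumes a Linux host with the IOMMU enabled; if /sys/kernel/iommu_groups
# is empty, the IOMMU is off (or not exposed) and passthrough won't work anyway.
import os
import subprocess

GROUPS = "/sys/kernel/iommu_groups"

for group in sorted(os.listdir(GROUPS), key=int):
    print(f"IOMMU group {group}:")
    for bdf in sorted(os.listdir(os.path.join(GROUPS, group, "devices"))):
        # lspci -s <address> prints the human-readable device name
        desc = subprocess.run(["lspci", "-s", bdf],
                              capture_output=True, text=True).stdout.strip()
        print(f"  {desc or bdf}")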
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
XCP-ng host lspci output

Code:
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1507
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Device 1508
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1509
00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150a
00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150a
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1509
00:02.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150b
00:02.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150b
00:02.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150b
00:02.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150b
00:02.5 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150b
00:02.6 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150b
00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1509
00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150b
00:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150b
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1509
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150c
00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150c
00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 150c
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 71)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 16f8
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 16f9
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 16fa
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 16fb
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 16fc
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 16fd
00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 16fe
00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 16ff
c1:00.0 SATA controller: JMicron Technology Corp. JMB58x AHCI SATA controller
c2:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961/SM963
c3:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal]
c4:00.0 Ethernet controller: Aquantia Corp. AQtion AQC113 NBase-T/IEEE 802.3an Ethernet Controller [Antigua 10G] (rev 03)
c5:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. Device 8126 (rev 01)
c6:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal]
c7:00.0 VGA compatible controller: NVIDIA Corporation GK208 [GeForce GT 720] (rev a1)
c7:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1)
c8:00.0 USB controller: ASMedia Technology Inc. ASM2142/ASM3142 USB 3.1 Host Controller
c9:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 150e (rev d1)
c9:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Rembrandt Radeon High Definition Audio Controller
c9:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Device 17e0
c9:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 151e
c9:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h/19h HD Audio Controller
c9:00.7 Signal processing controller: Advanced Micro Devices, Inc. [AMD] Device 164a
ca:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 150d
ca:00.1 Signal processing controller: Advanced Micro Devices, Inc. [AMD] Device 17f0 (rev 10)
cb:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 151f
cb:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Device 151a
cb:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 151b
cb:00.5 USB controller: Advanced Micro Devices, Inc. [AMD] Device 151c
cb:00.6 USB controller: Advanced Micro Devices, Inc. [AMD] Device 151d
 

cmmh

Member
Feb 26, 2021
As a note, I've been unable to get Frigate 0.16 to use the iGPU, since the underlying Docker image uses Bookworm, which doesn't support this graphics chip in either libva or the radeonsi driver. I haven't found a workaround yet. I'm a bit disappointed, but that's what I get for running bleeding edge. According to Nick from the Frigate team, they only just upgraded to Bookworm with 0.16, so Trixie might be some time off yet.
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
FYI - I reached out to Minisforum support about the two items (iGPU passthrough and DASH in the BIOS). I'm waiting to hear back on iGPU passthrough; they said DASH isn't implemented in the BIOS. Seems like they won't be adding DASH either, but I suggested it would be a great feature and an alternative to the IPMI that many server motherboards offer.

Update: Support kind of sucks, lol. I seem to be getting different people, and they don't offer the same level of help when answering.
I.e., the first person seemed to want to help; the second just blew it off.
Doesn't look like they'll bother with DASH support on this model, but they may consider it for another product. The second person completely didn't get my question about PCIe passthrough, so I just asked if they could send it to an engineer, and whether they could take the display/audio devices out of the shared IOMMU group and put them in their own group.

Last update: Support is useless; they don't want to do anything. If anyone has access to an engineer there who works on the BIOS, that might be a way to ask about the IOMMU grouping for the iGPU.
 

cmmh

Member
Feb 26, 2021
I have been able to get Ollama working with ROCm on this unit. You'll have to set HSA_OVERRIDE_GFX_VERSION=11.0.0 since the GPU is technically unsupported by ROCm. I'm not ready to do any benchmarking yet, but it's easily 10x faster using the GPU than CPU-only for the same tasks, and I've done zero optimization of the system. Depending on the model, I'm getting at least 10 tokens a second.

I should say that I also have Jellyfin, Frigate, and Immich all running and sharing the GPU resources through LXC containers. Frigate CPU usage went way down compared to my previous 11th-gen Intel box that ran it.
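If anyone wants to put harder numbers on the tokens-per-second claim, here's a minimal Python sketch. It assumes Ollama is running locally on its default port 11434 (launched with HSA_OVERRIDE_GFX_VERSION=11.0.0 in its environment as described above) and that the model name is swapped for whatever you actually have pulled; it uses the eval_count/eval_duration fields Ollama returns to compute generation speed.

Code:
# Minimal sketch: measure Ollama generation speed via its REST API.
# Assumes Ollama is listening on localhost:11434; "llama3.1" is a placeholder,
# substitute a model you have pulled.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3.1",
    "prompt": "Explain IOMMU groups in two sentences.",
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
resp = json.load(urllib.request.urlopen(req))

# eval_count = generated tokens, eval_duration = time spent generating (nanoseconds)
tok_per_sec = resp["eval_count"] / (resp["eval_duration"] / 1e9)
print(f"{resp['eval_count']} tokens -> {tok_per_sec:.1f} tok/s")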
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
I have been able to get Ollama working with ROCm on this unit. You'll have to set HSA_OVERRIDE_GFX_VERSION=11.0.0 since the GPU is technically unsupported by ROCm. I'm not ready to do any benchmarking yet, but it's easily 10x faster using the GPU than CPU-only for the same tasks, and I've done zero optimization of the system. Depending on the model, I'm getting at least 10 tokens a second.

I should say that I also have Jellyfin, Frigate, and Immich all running and sharing the GPU resources through LXC containers. Frigate CPU usage went way down compared to my previous 11th-gen Intel box that ran it.
Can you post details on how you got this working? If you have time, steps as well?
 

WANg

Well-Known Member
Jun 10, 2018
New York, NY
A few items noticed so far. This is on the Pro model.

  • Draws around 40-50 watts
    • Installed: Google Coral USB, external DEG1 dock with a GT 720 video card (for testing), 96GB ECC RAM, 3 M.2 drives and 1 HDD, internal USB 3 card.
  • Unit seems well constructed.
  • OSes it can run:
    • xcp-ng 8.3
    • Windows 11
    • Proxmox 8/9
    • I skipped testing the cloud OS
  • One M.2 slot is PCIe 4.0 x2, the other two are PCIe 4.0 x1. A 990 Pro hits a little below 4,000 MB/s in the x2 slot and about 1,600 MB/s in the x1 slots in SSD speed tests.
  • I can pass through the NVIDIA card in the DEG1 dock to a VM.
  • So far I have not been able to pass through the iGPU to a VM (some notes for Proxmox exist on the net, but I haven't tested them).
  • Under Windows 11 you can run the AMD AI tools.
  • You can adjust the iGPU's video memory up to 48 GB in the BIOS (at least on the Pro with 96GB ECC RAM).
  • Trying to see if I can get the AMD management console to work, since the CPU seems like it should support it.

Items I'd like help with
  • iGPU passthrough; xcp-ng preferred, but I'll also run Proxmox if needed (nothing super janky).
  • Seeing if the AMD management console can be used to control the box remotely.
    • My understanding so far is that the CPU can use AMD DASH (AMD CPU specs), but it may need BIOS support.
Don't bother with DASH. Considering that I've run 3 generations of hardware with AMD Pro support (HP t730/t740/t755 Elite thin clients with the RX427BB/V1756/V2546 APUs... the embedded versions of the FX7600P/Ryzen 5 Pro 2650H/Ryzen 5 Pro 4650H respectively) and couldn't get anything sane out of it, I think DASH is pretty much one of those half-baked AMD things.

Correction: it'll probably work if you run Windows with some orchestration/endpoint management suite (like SCCM or BigFix), but it's not at all OS-agnostic or easy to get working the way it was theoretically promised. AMD would need to add DASH target support to Linux or BSD, and I don't see that being a big priority for them.

AMD is fairly hostile (or at least not entirely supportive) toward GPU virtualization or offloading on their iGPUs - at least on their Vega-based silicon (anything before the Rembrandt APUs), which meant it doesn't work on the t740/755s. There are hobbyist writeups about getting it working on the later Rembrandt (6x0M, RDNA2-based) and Phoenix/Hawk Point (7x0M, RDNA3-based) parts. For Strix (8x0M, RDNA3.5-based) it should be somewhat similar.

 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
If anyone is looking for the NPU to test passthrough, it's not labeled in the lspci output. I used Windows to find the device ID. For my system (not sure all units are the same) it ended up as:
ca:00.1 Signal processing controller: Advanced Micro Devices, Inc. [AMD] Device 17f0 (rev 10)

For XCP-ng I had to manually add it to a VM, since it doesn't show up as a device you can enable passthrough on. It's still not working in the Windows VM.
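For anyone wanting to try the same, this is roughly the flow I mean, as I understand the XCP-ng passthrough docs: hide the device from dom0 so xen-pciback claims it, reboot, then set other-config:pci on the VM. The Python wrapper below is just to keep the snippets in this thread in one language; the VM UUID is a placeholder, and the PCI address is the NPU from the lspci output above - adjust both for your own box.

Code:
# Rough sketch of manual PCI passthrough on an XCP-ng host (run as root in dom0).
# The VM UUID is a placeholder; the PCI address is the NPU from the lspci output above.
import subprocess

NPU_BDF = "0000:ca:00.1"          # adjust if your layout differs
VM_UUID = "<your-vm-uuid>"        # find it with: xe vm-list params=uuid,name-label

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Hide the device from dom0 so xen-pciback claims it (takes effect after a reboot).
run(["/opt/xensource/libexec/xen-cmdline", "--set-dom0",
     f"xen-pciback.hide=({NPU_BDF})"])

# 2) After the reboot, attach the hidden device to the VM.
run(["xe", "vm-param-set", f"uuid={VM_UUID}", f"other-config:pci=0/{NPU_BDF}"])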
 

WANg

Well-Known Member
Jun 10, 2018
New York, NY
If anyone is looking for the NPU to test passthrough, it's not labeled in the lspci output. I used Windows to find the device ID. For my system (not sure all units are the same) it ended up as:
ca:00.1 Signal processing controller: Advanced Micro Devices, Inc. [AMD] Device 17f0 (rev 10)

For XCP-ng I had to manually add it to a VM, since it doesn't show up as a device you can enable passthrough on. It's still not working in the Windows VM.
Hmmm... did you pass through the iGPU as well, or just the device attached to ca:00.1? You'll probably need to install the AMD Radeon drivers and then ROCm, and even then the official support for it is barely there; the use case is kinda useless as it's only for PyTorch (the NPU only pushes 40-50 TOPS max?). You're better off just waiting for gfx1151 (the Radeon 890M iGPU) to have support for more use cases added to ROCm later on. If it works now, it's probably some bleeding-edge use case that'll break on a kernel/driver update.
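If you do go down the PyTorch/ROCm road, a quick sanity check that the runtime actually sees the GPU looks something like the sketch below. It assumes a ROCm build of PyTorch is installed (on ROCm builds the GPU is exposed through the torch.cuda namespace) and possibly the same HSA_OVERRIDE_GFX_VERSION override mentioned earlier.

Code:
# Quick sanity check that a ROCm build of PyTorch sees the GPU.
# Unsupported GPUs may also need HSA_OVERRIDE_GFX_VERSION set before launching Python.
import torch

print("GPU visible:", torch.cuda.is_available())   # ROCm devices show up via the cuda API
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK:", (x @ x).shape)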
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
Hmmm... did you pass through the iGPU as well, or just the device attached to ca:00.1? You'll probably need to install the AMD Radeon drivers and then ROCm, and even then the official support for it is barely there; the use case is kinda useless as it's only for PyTorch (the NPU only pushes 40-50 TOPS max?). You're better off just waiting for gfx1151 (the Radeon 890M iGPU) to have support for more use cases added to ROCm later on. If it works now, it's probably some bleeding-edge use case that'll break on a kernel/driver update.
I did pass both the iGPU and the NPU through to the Windows VM; both show up with error 43 after the driver install.
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
So I've been doing more testing on the unit and have settled on two possible outcomes.
Option 1 - Keep the old HTPC setup as is and sell the unit.
Option 2 - Install Windows 11 on the unit and use Docker Desktop with WSL in Windows.

Windows 11 allows all the devices to work, and with WSL/Docker I can run Frigate, etc. I haven't played with this setup enough yet to know whether it's a long-term option.

I'll keep updating as I have time to play with the setup.
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
I got TrueNAS SCALE installed.

I see two GPUs and selected the AMD one as the isolated GPU under Advanced. I added it to a Windows 11 VM and it fails with the error below.


Code:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/supervisor/supervisor.py", line 191, in start
    if self.domain.create() < 0:
       ^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/libvirt.py", line 1373, in create
    raise libvirtError('virDomainCreate() failed')
libvirt.libvirtError: internal error: qemu unexpectedly closed the monitor: 2025-10-07T03:57:44.162401Z qemu-system-x86_64: -device {"driver":"vfio-pci","host":"0000:c9:00.7","id":"hostdev5","bus":"pci.0","addr":"0xc"}: vfio 0000:c9:00.7: failed to setup container for group 32: Failed to set group container: Invalid argument

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/api/base/server/ws_handler/rpc.py", line 323, in process_method_call
    result = await method.call(app, params)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/api/base/server/method.py", line 52, in call
    result = await self.middleware.call_with_audit(self.name, self.serviceobj, methodobj, params, app)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 911, in call_with_audit
    result = await self._call(method, serviceobj, methodobj, params, app=app,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 720, in _call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/api/base/decorator.py", line 93, in wrapped
    result = await func(*args)
             ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_lifecycle.py", line 57, in start
    await self.middleware.run_in_thread(self._start, vm['name'])
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 627, in run_in_thread
    return await self.run_in_executor(io_thread_pool_executor, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 624, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_supervisor.py", line 68, in _start
    self.vms[vm_name].start(vm_data=self._vm_from_name(vm_name))
  File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/supervisor/supervisor.py", line 201, in start
    raise CallError('\n'.join(errors))
middlewared.service_exception.CallError: [EFAULT] internal error: qemu unexpectedly closed the monitor: 2025-10-07T03:57:44.162401Z qemu-system-x86_64: -device {"driver":"vfio-pci","host":"0000:c9:00.7","id":"hostdev5","bus":"pci.0","addr":"0xc"}: vfio 0000:c9:00.7: failed to setup container for group 32: Failed to set group container: Invalid argument

Looks like it added a bunch of PCIe devices.

Going to see what it added and remove the incorrect ones.

Update - removed everything but the AMD GPU, audio, and NPU; still not working with passthrough.
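The "failed to set group container" error in the traceback above generally means something else in IOMMU group 32 is still bound to a host driver, so VFIO can't take over the whole group. A small sketch (same assumptions as the group listing earlier in the thread: Linux host with sysfs) to see what shares a group with the device QEMU complained about, and which driver currently owns each member:

Code:
# Sketch: show which devices share an IOMMU group with a given device and what
# driver each one is bound to. Every member must be bound to vfio-pci (or unbound)
# before QEMU can claim the group. The address below is from the error above.
import os

DEV = "0000:c9:00.7"  # change as needed

group = os.path.basename(os.readlink(f"/sys/bus/pci/devices/{DEV}/iommu_group"))
print(f"{DEV} is in IOMMU group {group}")

for bdf in sorted(os.listdir(f"/sys/kernel/iommu_groups/{group}/devices")):
    drv = f"/sys/bus/pci/devices/{bdf}/driver"
    driver = os.path.basename(os.readlink(drv)) if os.path.islink(drv) else "(no driver)"
    print(f"  {bdf}: {driver}")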
 

SlowmoDK

Active Member
Oct 4, 2023
I see two GPUs and selected the AMD one as the isolated GPU under Advanced. I added it to a Windows 11 VM and it fails with the error below.
You need ACS override support, either from the BIOS or via a kernel workaround, to separate the GPU into its own IOMMU group, but that comes with its own set of problems.
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
Yeah, seems like passthrough of the iGPU/NPU isn't something that will happen easily. So I'm back to using Windows as a base and seeing if Docker/VMs will work for my use case; if not, I'm going to sell the unit.
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
PS - I tested that passthrough of the 2080 Ti via the DEG1 dock works from TrueNAS to a Windows VM, but that worked on all the other platforms as well.
 

SlowmoDK

Active Member
Oct 4, 2023
Yeah, seems like passthrough of the iGPU/NPU isn't something that will happen easily. So I'm back to using Windows as a base and seeing if Docker/VMs will work for my use case; if not, I'm going to sell the unit.
Have you tried Proxmox with pcie_acs_override=downstream enabled?

The ACS override patch is built into the Proxmox kernel.
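If you try it: on Proxmox the override typically goes on the kernel command line (GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub followed by update-grub, or /etc/kernel/cmdline plus proxmox-boot-tool refresh on systemd-boot installs), then reboot. Below is a tiny sketch to confirm the override is on the running kernel and to count IOMMU groups; compare the count before and after enabling it - with the override working, the iGPU functions should end up in their own group(s).

Code:
# Sketch: confirm pcie_acs_override is on the running kernel's command line and
# count IOMMU groups. Compare the count before and after enabling the override.
import os

cmdline = open("/proc/cmdline").read()
print("ACS override on cmdline:", "pcie_acs_override" in cmdline)
print("IOMMU groups:", len(os.listdir("/sys/kernel/iommu_groups")))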