Topton 'NAS' motherboard.


Camprr23

Member
Nov 20, 2019
Just got myself the N6005-powered NAS motherboard (https://nl.aliexpress.com/item/1005004761498719.html, also available from other resellers). It costs around $300 (less with rebates).

A couple of quick conclusions after installing Debian:

1. 64Gbytes is NOT supported, 'only' 32Gbytes. Yes, I have tested this with actual DIMMs, not just looked at the processor specs. (Edit: 64Gbytes does work, see below.)
2. The SATA ports are 'weird': one hangs off the SoC's native (Intel) controller, while the other 5 sit behind a JMicron JMB585, so throughput is limited, especially in RAID sets. The JMB585 is also limited to ~1Gbyte/sec here because it only negotiates a single lane: LnkSta: Speed 8GT/s (ok), Width x1 (downgraded). The chip itself supports two lanes ("PCIe Gen3 x2 to x5 SATA 6Gb/s"). See the lspci snippet at the end of this post.
3. The heatsink is insufficient to keep all cores running at 3.3GHz 100% of the time; it 'settles' to around 2.4GHz for all cores (or is this part of the P-state settings, and can it be tuned?)
4. The board uses a 24+4-pin ATX connector; you have to power the +4 pin with 12V, otherwise it will not start up.
5. Of the 6 advertised USB ports, 2x are 2.0 (internal to the motherboard, for boot USB sticks), 2x are 3.0 (external connector) and 2x are pin headers; no header-to-port cable is included.
6. An ATX I/O plate is not included.
7. The M.2 NVMe slots are PCIe 3.0 x1, so only ~1Gbyte/sec each.
8. No problems so far with the Intel i226 chips; I have been running them at 1Gbit rather than 2.5Gbit though (no 2.5Gbit switch yet).
9. With 32Gbytes, a 970 EVO Plus 2Tbyte and a PicoPSU 120, I am running around 9-10W at idle and ~25W at full CPU load (4 cores), measured on my lab power supply.
10. The iGPU is not supported by the i915 driver out of the box, so it is possibly problematic with Plex for re-encoding.

So theoretical storage throughput is 'only' 1Gbyte/sec for each of the NVMe drives, 600Mbyte/sec for the SoC-attached SATA port, and ~1Gbyte/sec shared between the other 5 SATA ports. On the network side we have 4x 2.5Gbit Ethernet, so also around 1Gbyte/sec combined.
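For reference, points 2 and 7 come straight out of lspci; a quick way to check the negotiated link widths on any of these boards (standard pciutils, needs root to read the link registers):
Code:
# Print each device header plus its link capability vs. negotiated status;
# the JMB585 and NVMe entries show 'Width x1 (downgraded)' on this board
sudo lspci -vv | grep -E '^[0-9a-f]|LnkCap:|LnkSta:'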
 

abufrejoval

Member
Sep 1, 2022
64GB must be a BIOS issue then, because I've tested 64GB (2 DDR4-3200 modules, which run at DDR4-2933 on Jasper Lake) on my Intel NUC11ATK with the same SoC.

But it took almost an eternity to boot after switching from 2x8GB DDR4-2400: I was ready to turn it off, thinking it had failed, when I got distracted, and then I noticed it coming to life.

PCIe/SATA lanes are severely constrained on Atoms: only 2 SATA ports were ever natively available, and 6 PCIe lanes had to suffice before Jasper Lake. Goldmont couldn't do 6x1 allocations, and its lanes were only PCIe 2.0. Jasper Lake upgraded that to 8x PCIe 3.0, evidently with more flexibility in the lane allocations.

4 lanes are needed to support the four i225 NICs, which only leaves four lanes to go around for everything else.

The JMB585 takes 2 PCIe 3.0 lanes for almost 2GB/s of theoretical bandwidth. With that it might actually manage an HDD RAID5 at near 500MB/s, or deliver a halfway decent SATA-SSD JBOD, while a single lane would cripple it too much. But if you do a RAID, it probably wouldn't get better if it spanned both the onboard and the JMB-connected drives, which is why leaving the 2nd SoC port alone (or routing it to one of the M.2 sockets) may have been the better choice.

For the NVMe slots, I'd hope that it at least uses 2 lanes with only a single M.2 drive mounted (or the 2nd being SATA): with two present there isn't much of a choice but to split them.

It would be (academically) interesting to bench a five drive RAID5 only on the JMB vs. a six drive RAID5 covering the onboard SATA, but most would probably stick with using the onboard SATA for an SSD.
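If someone actually wants to run that comparison, a rough sketch (assuming mdadm, hypothetical /dev/sdb through /dev/sdf device names, and disks whose contents you don't care about):
Code:
# Five-disk RAID5 across the JMB585 ports only (hypothetical names; destroys data!)
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[b-f]
# Let the initial resync finish first (watch it in /proc/mdstat), then:
dd if=/dev/md0 of=/dev/null bs=1M count=8192 iflag=direct
# Repeat with --raid-devices=6, adding the onboard-SATA disk, for the comparison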

The N6005 is no speed demon, but it is surprisingly spunky at running a 4k desktop or home theatre.

The iGPU isn't recognized as an i915 variant by e.g. Alma/Oracle/Rocky EL8 because its PCI ID is unknown to their kernels. But it is actually compatible, and there is a boot override switch to tell the i915 driver to try to use it (the kernel even mentions how to activate it during boot).
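For anyone searching later: on recent kernels that override is the i915 force_probe parameter (older kernels called it alpha_support), and the device ID to feed it is the iGPU's PCI ID, 4e71 per the lspci listing further down this thread. A minimal sketch:
Code:
# Kernel command line (e.g. appended to GRUB_CMDLINE_LINUX in /etc/default/grub):
i915.force_probe=4e71
# Or persistently as a module option (as root):
echo "options i915 force_probe=4e71" > /etc/modprobe.d/i915.conf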

Well, it could have been Mandriva or Fedora... (I threw quite a few Linux variants at the machine before settling on Windows 10 IoT in the end.)

It works just fine then, and it is again much more responsive at 4k than its J5005 predecessor, thanks to twice as many EUs and better use of dual-channel memory (~20GB/s measured).
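(A quick way to reproduce that kind of bandwidth figure, if sysbench is at hand; the block and total sizes are arbitrary choices:)
Code:
# Sequential memory reads in 1MiB blocks, 20GiB total; reports throughput in MiB/sec
sysbench memory --memory-block-size=1M --memory-total-size=20G --memory-oper=read run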

I haven't tried video en/re-coding on Linux, but 15GB 4k HEVC movies run flawlessly in VLC at pretty near idle CPU.

The worst thing about Jasper Lake is that now we all want its Gracemont brethren, without those pesky P-cores and with no upgrade on the $$$!
 

RolloZ170

Well-Known Member
Apr 24, 2016
3. The heatsink is insufficient to keep all cores running at 3.3GHz 100% of the time; it 'settles' to around 2.4GHz for all cores (or is this part of the P-state settings, and can it be tuned?)
3.3GHz is burst, not continuous all-core turbo...
The N6005 should do 3.0GHz all-core.
1. 64Gbytes is NOT supported, 'only' 32Gbytes. Yes, I have tested this with actual DIMMs, not just looked at the processor specs.
BIOS, or maybe the board lacks an address line to the DIMM slots.
 

Camprr23

Member
Nov 20, 2019
64GB must be a BIOS issue then, because I've tested 64GB (2 DDR4-3200 modules, which run at DDR4-2933 on Jasper Lake) on my Intel NUC11ATK with the same SoC.

4 lanes are needed to support the four i225 NICs, which only leaves four lanes to go around for everything else.

The JMB585 takes 2 PCIe 3.0 lanes for almost 2GB/s of theoretical bandwidth. With that it might actually manage an HDD RAID5 at near 500MB/s, or deliver a halfway decent SATA-SSD JBOD, while a single lane would cripple it too much. But if you do a RAID, it probably wouldn't get better if it spanned both the onboard and the JMB-connected drives, which is why leaving the 2nd SoC port alone (or routing it to one of the M.2 sockets) may have been the better choice.

For the NVMe slots, I'd hope that it at least uses 2 lanes with only a single M.2 drive mounted (or the 2nd being SATA): with two present there isn't much of a choice but to split them.

The iGPU isn't recognized as an i915 variant by e.g. Alma/Oracle/Rocky EL8 because its PCI ID is unknown to their kernels. But it is actually compatible, and there is a boot override switch to tell the i915 driver to try to use it (the kernel even mentions how to activate it during boot).
Some small things of note here:
1. The network chips are actually i226, so hopefully no more woeful UDP behaviour, and possibly slightly better power efficiency than the i225.
2. The NVMe ports really are 'only' PCIe 3.0 x1. lspci on my box confirms it: LnkSta: Speed 8GT/s (ok), Width x1 (downgraded), even with just one drive installed (in the first slot).
3. The JMB585 shows exactly the same message, 'just' 1 lane available: LnkSta: Speed 8GT/s (ok), Width x1 (downgraded)
lspci shows the following device config:
Code:
00:00.0 Host bridge: Intel Corporation Device 4e28
00:02.0 VGA compatible controller: Intel Corporation Device 4e71 (rev 01)
00:04.0 Signal processing controller: Intel Corporation Device 4e03
00:08.0 System peripheral: Intel Corporation Device 4e11
00:14.0 USB controller: Intel Corporation Device 4ded (rev 01)
00:14.2 RAM memory: Intel Corporation Device 4def (rev 01)
00:15.0 Serial bus controller [0c80]: Intel Corporation Device 4de8 (rev 01)
00:15.2 Serial bus controller [0c80]: Intel Corporation Device 4dea (rev 01)
00:16.0 Communication controller: Intel Corporation Device 4de0 (rev 01)
00:17.0 SATA controller: Intel Corporation Device 4dd3 (rev 01)
00:19.0 Serial bus controller [0c80]: Intel Corporation Device 4dc5 (rev 01)
00:19.1 Serial bus controller [0c80]: Intel Corporation Device 4dc6 (rev 01)
00:1c.0 PCI bridge: Intel Corporation Device 4db8 (rev 01)
00:1c.1 PCI bridge: Intel Corporation Device 4db9 (rev 01)
00:1c.4 PCI bridge: Intel Corporation Device 4dbc (rev 01)
00:1c.5 PCI bridge: Intel Corporation Device 4dbd (rev 01)
00:1c.6 PCI bridge: Intel Corporation Device 4dbe (rev 01)
00:1c.7 PCI bridge: Intel Corporation Device 4dbf (rev 01)
00:1e.0 Communication controller: Intel Corporation Device 4da8 (rev 01)
00:1e.3 Serial bus controller [0c80]: Intel Corporation Device 4dab (rev 01)
00:1f.0 ISA bridge: Intel Corporation Device 4d87 (rev 01)
00:1f.3 Audio device: Intel Corporation Device 4dc8 (rev 01)
00:1f.4 SMBus: Intel Corporation Device 4da3 (rev 01)
00:1f.5 Serial bus controller [0c80]: Intel Corporation Device 4da4 (rev 01)
01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
02:00.0 SATA controller: JMicron Technology Corp. JMB58x AHCI SATA controller
03:00.0 Ethernet controller: Intel Corporation Device 125c (rev 04)
04:00.0 Ethernet controller: Intel Corporation Device 125c (rev 04)
05:00.0 Ethernet controller: Intel Corporation Device 125c (rev 04)
06:00.0 Ethernet controller: Intel Corporation Device 125c (rev 04)
Otherwise not the most powerful board out there, but it is running at 12W now with 4x 2.5" SSDs connected, or around 9.3W without the drives connected.

Thanks for the tip on the i915; I already have it working now with the module options.
 

Yuki Iwatani

New Member
Nov 7, 2022
According to the manufacturer's webpage (it's manufactured by Changwang, like many similar small PCs reviewed and discussed on this site, e.g. the Topton Jasper Lake Quad i225V Mini PC report), the board does support 64GByte RAM: N5105-N6005-6SATA存储服务器NAS主板 ("N5105/N6005 6-SATA storage server NAS motherboard"), although no further detail is given on which sticks are compatible. Here's the product booklet from their downloads section, also not much use: 畅网N5105-N6005-6SATA主板NAS主板规格书(CW-N5105-NAS).pdf (the CW-N5105-NAS spec sheet).

After skimming through Jasper Lake boards on AliExpress I found what seems to be a previous iteration of this design, which explains the unusual number of Ethernet ports for a NAS: the board in question has a green solder mask instead of black, sports the older i225 network controllers, has no N6005 CPU option, and is called MW-NVR-N5105 ver.1.0. Now it all makes sense: all these ports are for connecting network surveillance cameras. It's really sad that we're left with such gimmicks that don't even offer full speed on all SATA ports. But could this thing sustain at least 4 drives at their average speeds of around 250MB/s? I'd like to see some benchmarks.

The double-NVMe drive layout looks similar to the QNAP TS-464C design, except the latter claims 2x NVMe at x2 instead of 2x at x1. Not sure in what universe subtracting 5Gbit of network cards equals 32Gbit of free bandwidth for two NVMe drives ╮(︶▽︶)╭

Overall this is a terrible base for a low-power NAS (though not as terrible as the earlier 12-port, single-DIMM N5095 designs); even something like the Odroid H3 sounds better. This board has no 10Gbit USB for external SSD backups, no full-speed x4 NVMe for an OS and cache drive (who the heck needs mirrored SSDs at x1 speed?), and the factory saved a dollar by fitting a jet-turbine fan instead of a proper passive alu heatsink. And wait, is that a TPM header? 10 pins: which modules does it accept?

P.S.: Does this board support CPU undervolting in the BIOS setup menu?
 



Camprr23

Member
Nov 20, 2019
The use of the N6005 (Jasper Lake) severely limits the number of PCIe lanes, and they've basically put too much onto the motherboard. The JMB585 and the two NVMe interfaces each get only a PCIe x1 link to keep the total number of lanes used down to 7. This limits 5 of the SATA drives to ~1Gbyte/sec in total, so ~200Mbytes/sec per drive, with higher peaks possible when they are not all used at the same time. I'll do some tests, but I currently don't own 5-6 drives capable of 500+ Mbytes/sec (in the end it only took 4 'cheap' drives).

But let's be honest: 4 drives generating ~250Mbytes/sec each are enough to saturate the network ports, or you could use the NVMe as cache. So yes, they are limited, but not in a way that cripples the board (hmm, not quite so, after some research/testing below). The alternative for me was the ZimaBoard, which has only 2 SATA ports, eMMC storage, and 8Gbytes of soldered memory, severely limiting its speed and usability.

I also have a Xeon E5 that is capable of saturating multiple 10Gbit links, but I have nothing that generates or consumes that amount of data in a meaningful way (it is a homelab, after all). And it draws upwards of 100W at idle (including the storage controller and the 10Gbit interfaces; SFP+ is very power-inefficient).

I am also trying to reduce the number of devices in the house; I don't want a separate NAS and a separate router. I realise this can be dangerous in terms of security, but some good firewall rules should make the difference here.

So for my uses this does fine; I am aware of the limitations and can work within them.

Also, who uses a TPM for a homelab? It gets in the way more than it is useful.

This board has its niche, and it fills my needs as a basic NAS/router. I'm quite happy they were willing to release this as a separate board. The only thing I regret is that they did not include an ATX I/O plate so you could mount it in a normal case.

The board does support changing the minimum voltage, as shown in the screenshots below.

I have also done some tests with the four SATA drives I have; each is capable of 300Mbyte/sec+. They are throttled to ~180Mbytes/sec each (so ~750Mbyte/sec combined) on the shared SATA controller.
cat speedtest.sh
#!/bin/sh
# Read 8GB from each of the four drives in parallel to load the shared JMB585 link
dd if=/dev/sda of=/dev/null bs=4k count=2000000 &
dd if=/dev/sdb of=/dev/null bs=4k count=2000000 &
dd if=/dev/sdc of=/dev/null bs=4k count=2000000 &
dd if=/dev/sdd of=/dev/null bs=4k count=2000000 &
wait # don't return until all four reads have finished

root@NAS:~# ./speedtest.sh
2000000+0 records in
2000000+0 records out
8192000000 bytes (8.2 GB, 7.6 GiB) copied, 43.2076 s, 190 MB/s
2000000+0 records in
2000000+0 records out
8192000000 bytes (8.2 GB, 7.6 GiB) copied, 43.2351 s, 189 MB/s
2000000+0 records in
2000000+0 records out
8192000000 bytes (8.2 GB, 7.6 GiB) copied, 43.7529 s, 187 MB/s
2000000+0 records in
2000000+0 records out
8192000000 bytes (8.2 GB, 7.6 GiB) copied, 45.5448 s, 180 MB/s


Also, as expected, a single drive is able to do 'more' than the ~180Mbytes/sec:


It kind of matches the performance I get from the NVMe drive on its x1 link. It's a 970 EVO Plus 2Tbyte, so capable for sure. It tops out at ~780Mbytes/sec:
dd if=/dev/nvme0n1p3 of=/dev/null bs=4k count=20000000
20000000+0 records in
20000000+0 records out
81920000000 bytes (82 GB, 76 GiB) copied, 106.178 s, 772 MB/s

Oh, and I forgot to mention: I have set the fan to come on at 36 degrees and go off at 34 degrees. It's off most of the time (pseudo-passive) ;-)

Undervolting is possible, but I did not try it: [BIOS screenshots of the voltage options attached]
 

elvis_saya

New Member
Oct 21, 2022
Thanks for opening a thread for this board and sharing your findings. My board (N5105) arrived today and I'm trying it out with 32GB of DDR4 and a Linux Mint live USB.

1. I was disappointed by the JMB585 LnkSta of x1 width (per lspci -vv). LnkCap states Width x2, so I initially thought it might be a power-saving mode, since I had not connected any HDDs to it. But your last post tested 4 SATA drives in parallel and it never exceeded 1000 MB/s.

2. Mine came with the ATX case back panel plus 6 screws, see pic below. It was hidden behind the padding inside the box.

3. I'm seeing 52 degrees C at idle (Mint live USB desktop). Running Prime95 I see a max of 87 degrees. I will probably re-paste the heatsink with Arctic MX-4. (A quick way to watch the temperatures is below this list.)

4. I used 2 HP-branded Hynix 16GB DDR4-3200 SODIMMs. The BIOS reports a speed of 2933 MT/s, which is the max for this SoC.
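Re point 3: a minimal way to keep an eye on those temperatures from the live session (assuming the lm-sensors package is available in Mint's repos):
Code:
sudo apt install lm-sensors
sudo sensors-detect --auto   # probe for the SoC's thermal sensors, accepting defaults
watch -n 2 sensors           # refresh the readings every 2s while Prime95 runs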

This may not be the ideal board, but for my purpose (low-power ESXi host with OPNsense, a "NAS" VM with HDD mergerfs/snapraid, and a media server VM for 2 users) it's more than enough.

[photo: the ATX back panel and screws included in the box]
 

Camprr23

Member
Nov 20, 2019
Thanks for that. I rechecked my box; no such luck, no backplate, no screws.

Yeah, the TL;DR of my last post was:
With the limited PCIe lanes on the N6005 (8 flexible high-speed lanes), they literally could not add more for the storage:
2x in total for the 2 NVMe slots
1x for the onboard SATA port
4x for the network adapters, one each (although here it should be possible to put 2 controllers behind one lane, to free up more for storage)
1x for the JMB585 SATA controller (5 SATA ports)
That accounts for all 8 lanes; the USB and the other 'gubbins' have their own dedicated connections.

This may not be the ideal board, but for my purpose (low-power ESXi host with OPNsense, a "NAS" VM with HDD mergerfs/snapraid, and a media server VM for 2 users) it's more than enough.
Indeed, this thing has a 'niche', which is: a low-power CPU with lots of storage options, not running at 100% throughput, but good enough for a NAS.
 

Yuki Iwatani

New Member
Nov 7, 2022
I found confirmation for most of my questions in the CPU datasheet [Jasper Lake EDS Vol1]:
[Flexible IO mapping table from the Jasper Lake EDS]
SATA, PCIe and USB 3.2 Gen 2 (the 10Gbit kind) are multiplexed, and board designers have to choose between them. They could have used x2 lanes for the JMB585 but decided to keep one SATA port connected natively. Each network chip is connected to a full x1 PCIe lane without any switches, wasting a lot of potential bandwidth (but probably saving on latency and heat), and the 2 NVMe drives are on x1 lanes. What's interesting is that there are completely independent additional ports for HS400 eMMC (400MB/s) and UHS-I SD card (100MB/s), both of which can serve as a boot drive. Overall I like Odroid's design more; it looks perfect for both a small-form NAS and a micro-PC, without the bulky inheritance from NVRs or soft-routers. It's sad we don't have much diversity in low-power server hardware these days.
[Odroid H3 block diagram]
@Camprr23 thank you for showing the Setup menu screens; now we'll have to decipher what these options actually do lol. You did not specify which 64GB memory kit you tested and failed. The [Odroid wiki] page and shop list the Samsung M471A4G43AB1-CWE 32GB sticks as compatible. It also says the following:
The first boot needs a long post process to start the BIOS due to a long period of checking the RAM timing parameters.
Once the configuration is stored into the backup memory in the SoC, it boots quickly.
Can someone take photos of the board's backside showing the heatsink mounting screws? Or measure the distances between their centers.
 

Camprr23

Member
Nov 20, 2019
The 64Gbyte memory I used was 2x Kingston KCP426SD8/32, which I have now realised is 'only' 2666MT/s, so it is too slow for this SoC. Doh! Apologies for the confusion.

I don't like the Odroid: not enough SATA, not enough network, though it does use the extremely efficient RTL network chipset.

The holes are 90mm x 40mm apart, as close as I can tell. The heatsink itself is 98mm x 57mm.
[photos: board backside and heatsink mounting]
 

Yuki Iwatani

New Member
Nov 7, 2022
After eyeballing low-res pics in various TS-464 reviews (both the export version with a PCIe x2 slot for a 10Gbit network card and the Chinese domestic-market model without one), I can reliably say it uses some sort of ASMedia SATA controller, perhaps the ASM1166. It also seems the same backplane with this controller is shared across model generations without changes.
[photos: QNAP TS-464 internals showing the ASMedia SATA controller]
The TS-464 reportedly has similarly arranged x1-speed NVMe slots, and additionally 2 red USB-A 3.2 10Gbit ports, which by the way do not steal HSIO lanes from PCIe devices according to the Flexible I/O table. Thus, hypothetically, this board could have had:
— 2x 10Gbit USB ports
— 2x 2.5Gbit network controllers, on an x1 PCIe link each
— 1x NVMe drive at x4 PCIe (or 2 drives at x2 each, or 1 at x2 with x2 lanes free for something else, e.g. a 10Gbit network card or SATA controller)
— 1x 4-port SATA controller at a full x2 PCIe
— 1x eMMC
— 1x SD card slot

The holes are 90mm x 40mm apart, as close as I can tell.
I suspected the heatsink sits too tight between the voltage regulator caps and the DIMM sockets, but being a flat slab on the bottom, it could easily be replaced with something beefier (don't lose the transparent plastic spacers!).

P.S.: some very clear pictures showing the internal layout of the 4-bay QNAP TS-451 from 2014 are in the TechPowerUp review:
[photo: TS-451 SATA backplane]
 

Camprr23

Member
Nov 20, 2019
I suspected the heatsink sits too tight between the voltage regulator caps and the DIMM sockets, but being a flat slab on the bottom, it could easily be replaced with something beefier (don't lose the transparent plastic spacers!).
Indeed, the plastic spacers are required to get everything nicely spaced without bending the PCB.
There is a _lot_ of horizontal space as soon as you go slightly more vertical:
[photo: clearance around the heatsink]
The space above the NVMe and the DIMMs could be used to create a bit more surface area for a passive cooler. But to be honest, the fan makes so little noise it's probably not worth it. Also, when idling you can set it to stop below a certain temperature in the BIOS. I've set it to 34/36 (on above 36, off below 34) and it seems to do just fine, staying off while the system idles anyway.
 

Yuki Iwatani

New Member
Nov 7, 2022
I see you placed the SSD in the slot right on top of the network chips, for extra-crisp baking ( ・ω・)⊃―{}@{}@{}-
Here's a screenshot from a YouTube video with a simple heatsink glued on in place of the fan; imagine the same fin height but covering the full surface:
[screenshot: board with a plain heatsink glued on instead of the fan]
 

vamega

Member
Nov 8, 2022
Wouldn't the purpose of a better heatsink/cooler be to allow the CPU to run at its full potential when necessary? What I understood was that the default heatsink results in thermal throttling.

This board is really exciting as a NAS/Plex server for me. The Odroid H3+ would be very interesting as well. I don't need four 2.5GbE ports; two would be more than enough. Bit of a waste to have all those PCIe lanes going to Ethernet. Seems like they'd only need 2 lanes for the Ethernet, freeing up enough for the JMB585 to use the two lanes it supports and for one of the M.2 slots to get two lanes.

Something more akin to the Odroid, but in a Mini-ITX form factor, would be ideal for me.
 

elvis_saya

New Member
Oct 21, 2022
I found confirmation for most of my questions in the CPU datasheet [Jasper Lake EDS Vol1]:
SATA, PCIe and USB 3.2 Gen 2 (the 10Gbit kind) are multiplexed, and board designers have to choose between them. They could have used x2 lanes for the JMB585 but decided to keep one SATA port connected natively.
Thanks for this. If you dig a bit deeper in the BIOS, in the PCIe settings/details, you can see that one of the 8 PCIe lanes is read-only (greyed out) and is reserved for the SATA.

4x lanes for the four i226 NICs
2x lanes for the 2 NVMe slots (x1 each)
1x lane for the JMB585
1x reserved for the native SATA

It's probably not possible to share a PCIe lane between two i226s, as each discrete chip is already wired for x1.

I also found out that the default BIOS settings have C-states turned off. Enabling them brought idle power consumption down 5 watts, from 20W to 15W (measured with a Kill A Watt; 2 sticks of DDR4 + 1 NVMe drive + a cheap, inefficient 500W PSU).
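If you want to confirm from Linux that the deeper C-states are actually being entered after flipping the BIOS switch, the standard cpuidle sysfs interface shows it (a generic sketch, not board-specific):
Code:
# Names of the C-states the kernel exposes, and cumulative time spent in each (µs);
# the deeper states' counters should keep growing while the box idles
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/time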

There is too much thermal paste on my unit and the heatsink screws were not tightened properly. Aside from the plastic spacers there is no support; the heatsink screw holes are spaced quite far apart, so take care not to bend the board by over-tightening.

[photo: heatsink mounting and thermal paste]

I have done a DIY heatsink with thermal adhesive for an N5100 laptop before, but here I'll use the stock one first, because the heatsink fan is dead silent, as @Camprr23 mentioned. The fan only has 3 pins though, so it is not PWM-capable.

The i226 chips beneath the NVMe slot are a concern, though there is some clearance. Since I'll be running them at 1Gb/s I'm hoping they won't get too warm.

[photo: NVMe slot sitting above the i226 chips]
 

Camprr23

Member
Nov 20, 2019
Why would you replace the heatsink with performance as the target? For passive cooling I could understand it. My whole point is to keep power consumption as low as possible while keeping RAM capacity and interfaces plentiful (and throughput 'enough'); most of my homelab/mediabox loads are relatively light for a box like this (even though I have symmetric 1Gbit Internet). Power costs here in the Netherlands are quite high: keeping a 10W load running 24/7/365 costs around $80/year, and a 100W load rises to $800/year, just to give you an idea of the motivation. My old 'NAS' had a 40W (idle) CPU, storage controllers, spinning disks, multiple 10Gbit Ethernet ports, etc.; it idled around 200W. So even though the hardware is not 'cheap', I still make the money back in about a year of use (that's how I sell it to the wife: it's an investment).

Thanks for the hint on the C-states. I have started playing with the different governors/schedulers in Linux to stop the processor from boosting too often. I'm also still wrestling with the i915 driver, as the developers renamed the setting again (god knows why).
Even using C-states C0/C1 as the lower limit (set in the BIOS), I was unable to get power consumption below 11.7-12.2W. I'm using a PicoPSU and a 90W power brick, so I should be able to get significantly lower, judging by your figures.
Experimentation continues.
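For reference, the boosting behaviour can be inspected and reined in via the standard cpufreq sysfs interface (a generic sketch; paths assume a stock Debian kernel, run as root):
Code:
# Which scaling driver/governor is active, and current per-core frequencies (kHz)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
# Put every core on the powersave governor to curb opportunistic boosting
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
  echo powersave > "$g"
done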

On the matter of throughput: even if you run the SATA ports in RAID-5, you will still have enough throughput from the HDDs/SSDs to saturate a 2.5Gbit network port (2.5Gbit is roughly 300Mbytes/sec, and even my four throttled drives delivered ~750Mbytes/sec combined), so the whole PCIe 'under-porting' matter is kind of moot. If you want to fill 2 or 3 Ethernet ports at max throughput simultaneously, you should be looking at something other than an N6005 anyway.

Good point on moving the NVMe drive to the cooler slot. Not only is it not above the network controllers, it will also get slightly cooled by the CPU fan, as the fins are open on the NVMe side.
 

vamega

Member
Nov 8, 2022
Wow, that's very expensive power. I thought NYC was bad at $0.25 per kWh.
I don't run much on my home server either, but my main interest in this board is the iGPU.

Does anyone know if this board supports staggered disk spin-up on the SATA ports?
 

Camprr23

Member
Nov 20, 2019
[thermal camera image of the board]
I'll move the NVMe SSD from above the network ports and get another picture, but the board is pretty warm there.
 

Camprr23

Member
Nov 20, 2019
About the staggered spin-up: I'll have a look in the BIOS. There were a staggering number of options in there for SATA...
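While I dig around in there, a quick way to check from Linux whether the AHCI controllers even advertise the feature (a sketch; the kernel's AHCI driver logs each controller's capability flags at boot):
Code:
# 'stag' among the flags means staggered spin-up (SSS) is enabled on that controller
dmesg | grep -i 'ahci.*flags:'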
 

Yuki Iwatani

New Member
Nov 7, 2022
I'll move the NVMe SSD from above the network ports and get another picture
Considering the area around the SATA controller's heatsink is nearing the 50s °C, I'm not sure that's even going to improve anything. But you should definitely add a heatsink to the NVMe drive's controller, just to be safe, and see if it improves read/write speeds a bit; there's still ~60MByte/s of headroom to the net bandwidth limit from your measured 780.