Thoughts on a low power NAS and VM server


tinker

I've been repurposing enterprise-grade servers for years; the first was based on an i386, then various Pentium 3/4 machines, and most recently a Dell R710 with dual Xeons, redundant PSUs, loads of RAM, etc.

The Dell R710 idles at about 70W with 5 x HDD, 10GbE and a PERC card. It draws a lot more when I get it to do more stuff, but 98% of the time it's idle or doing very little. I don't mind it ramping up when required, but I want it to idle much lower.

Being in the UK, electricity costs about $0.43 USD per kWh, so you can see how that scales up for something running 24/7/365.
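To put a number on it, a quick back-of-the-envelope using the figures above:

```python
# Rough annual cost of an always-on box at UK rates (figures from above)
idle_watts = 70        # R710 idle draw
price_per_kwh = 0.43   # approx UK tariff in USD

kwh_per_year = idle_watts / 1000 * 24 * 365   # ~613 kWh
print(f"~${kwh_per_year * price_per_kwh:.0f} per year just idling")  # ~$264
```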

The issue is that they're power-hungry, noisy, and take up a lot of space.
But they are cheap to buy, have loads of capacity, and are reliable.

I'll keep the firewall separate; that's an N5105 box with quad 2.5GbE which works well for my fibre connection, and it usually sits around the 7W mark for the whole box. My home network uses a 10GbE backbone.

I'm thinking of either Proxmox running TrueNAS, or Unraid on its own so I can make use of Unraid's container/VM functionality.
I'd like to run Pi-hole, Home Assistant, and a streaming server for music and UHD video.
I'll have approx. 5 x 18TB HDDs, and I've got plenty of NVMe SSDs for caching.

Option 1
N5105 soft-router type motherboard with 6 x SATA and 4 x i226 (2.5GbE); there are some with 10G SFP+ plus 4 x i226, and a few with 9th-gen i5/i7 instead.
Reading the threads on here, it sounds like these N5105 boards are quite constrained by PCIe I/O, though with spinning HDDs it may not matter. Some have only the one SATA port but 3 or 4 USB 3 ports, some even gen 1/gen 2.

Option 2
Ryzen 5600H mini PC: a much more powerful CPU, plenty of RAM, plenty of bandwidth (2.5GbE, 4 x USB 3 gen 1/2).
Use USB 3 HDDs instead of SATA 3.
Option for 64GB RAM and NVMe on PCIe 3.
Still low power when idling.

Option 3
Not sure??

What I really want
Ideally I can get a mini-ITX board with a low-power i5/i7 with plenty of E-cores, or a similar Ryzen CPU: a minimum of 4 cores going up to 12, with at least dual-channel DDR4 (DDR5 would be better), idling at no more than 8W and ramping up to 65W if needed, with the option to limit it to 20/30/40W in the BIOS
8 x SATA 3 at full speed
2 x i226, or better still 2 x 10GbE copper, or at least SFP+ with sufficient PCIe bandwidth
4 x USB4, or at least 4 x USB 3 gen 2
1 x DP
1 x regular 19V input, or USB4 power input

What are your thoughts or suggestions?
 

david.ng

OMG, I am looking for exactly this kind of thing.
My 8-SATA, dual-CPU, 512GB-RAM NAS costs me 250W to run 4 VMs
(PHP web server + Jellyfin + TV tuner server + an Ubuntu desktop for watching YouTube/Netflix/4K video, plus normal desktop Excel/Word and web browsing).
All the VMs sit at very low usage, as I am the only user.

And my router has only 512MB RAM; with OpenWrt running an nginx redirect service, free RAM is less than 50MB.

So I was hoping to replace the above two with a 15W-or-so device.

I was looking at those popular small N100 routers,
then found out they max out at 16GB of RAM, which can't run my 4 VMs.

Then I found the 1165G7: it has 8 threads, max 64GB RAM, 15W, and double the N100's CPU Mark score.
The problem is... no 8 SATA ports.
I've noticed an M.2 NVMe slot can be expanded to 6 SATA ports using the ASM1166 chipset; there's even one with 2 x SFF-8087 ports giving 8 SATA (JMB585 + JMB575 chipsets), but that one seems to get bad reviews for not delivering full SATA speed,
and it seems no one on the internet has tried it.

Please share.
 

DavidWJohnston

I came across this article about someone looking to do something similar; apparently the 12th- and 13th-gen Intel CPUs have excellent idle consumption. It's an interesting read:


10G NICs can vary a lot in terms of their power consumption as well; some draw 20W all on their own.

I don't think there are many m-ITX boards with 8 SATA ports, but there are a lot of great ATX options, including DDR5 and PCIe 5.

Power-saving settings are going to be super important: HDD spin-down, C-states, not running always-on services, etc. That will make a huge difference. Interesting project!
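As a rough sketch, on Linux you can check whether the CPU actually reaches its deeper idle states via the standard cpuidle sysfs files; if the deepest states show near-zero residency, something is holding the package awake:

```python
# Print time spent in each idle state for CPU0 (Linux cpuidle sysfs).
from pathlib import Path

for state in sorted(Path("/sys/devices/system/cpu/cpu0/cpuidle").glob("state*")):
    name = (state / "name").read_text().strip()
    usec = int((state / "time").read_text())   # total microseconds in this state
    print(f"{name:10s} {usec / 1e6:12.1f} s")
```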
 

tinker

I've been through this scenario and requirement more than once, as you'd guess.
The main driver this time is that the Asustor 5304T I've been using reboots by itself, and I'm fed up with sending it back and them not fixing the issue.

I'll share my efforts here.

Yes, some faster NICs, especially SFP+ modules, do take a lot of power, especially the long-range laser type.

Good point on the other settings.

I've seen a few mITX boards with 6 x SATA, but not 8. It was a wish list and may change as time goes by.

As a stop-gap measure, I've re-commissioned an old Lenovo TS140. It's already populated with 32GB ECC, a flashed PERC H310 (making it a dumb 8 x SATA HBA) and a USB 3 gen 2 PCIe card. I added a single-port 2.5GbE PCIe card. I may re-jig it later; the PERC may be replaced by a 10GbE card.
I have 5 x 18TB HDDs connected to the onboard SATA, leaving a slot for the 10GbE NIC, when I find it.

I've put the latest Unraid on it and the array is initialising now; just another 14 hrs to go.

(Attachments: TS140 specs.png, Unraid array.png)
 

tinker

@david.ng this may be a useful post for you

I may still go down that route, but we shall see.
 

SnJ9MX

I was looking at those popular small N100 routers,
then found out they max out at 16GB of RAM, which can't run my 4 VMs.
Where do you see this? The N100 supports DDR5, which has 48GB modules being released. 32GB should be very easy even with a single SODIMM slot; 48GB (if it isn't out yet) should be within a couple of months.
 

david.ng

@SnJ9MX the N100's max supported RAM is 16GB.
It's a CPU limit, not a SODIMM limit.

I have been doing a lot of research.

The A2SDi-H-TF; you can find some cheap online.
It has 12 x SATA, 1 x M.2, 25W, 8 cores / 8 threads, one PCIe x4 slot, max 256GB ECC RAM, ITX form factor, IPMI, and 2 x 10G LAN ports.
A dream board, but:
it's 25W,
there's no GPU (I know the ASPEED AST2400 can pass through as a GPU, but it's weak and can't do video decode),
and only one USB port.

Then there is the Morefine S500+ with a Ryzen 7 5825U.
It has 8 cores / 16 threads, 15W, max 64GB RAM, and an integrated GPU that can do video decode, good for a Jellyfin server, plus one 2.5GbE and one 1GbE LAN port.
Performance is about 4x the Atom C3758 in the A2SDi-H-TF.
I could use one of the M.2 slots to expand to 8 x SATA,
another M.2 as a PCIe slot for the TV tuner card,
and the M.2 Wi-Fi slot for a DBDC MT7915/MT7916 card to replace my router.
But 64GB RAM...

Please share ideas.
 

SnJ9MX

So I guess the 4 x 32GB modules I'm running on my 6th-gen Intel are a lie then and don't work? ARK lists what worked at the time of release; if a larger DIMM is released later, it will almost certainly work, but ARK won't be updated. This is very well known.

Here's an example of 32GB in an N200, which is the same generation and no different from the N100 from a max-memory perspective: https://forums.servethehome.com/ind...nxxx-quad-nic-router.39685/page-6#post-376051
 

Marsh

My plan is to keep the Proxmox host and the NAS function separate.

Current Proxmox host:
HP 600 G6 SFF, i5-10500, 4 x 16GB RAM, 2TB Team Group NVMe,
2TB Team Group SATA SSD.
Proxmox 7.4 idles at 4-5W.
With a dual-port Intel network card added,
Proxmox 7.4 idles at 6-7W.

My other VM host:
Shuttle DH470 mini PC, i5-10500T, 2 x 16GB RAM, 1 SATA SSD, dual-port NIC.
W10 idles at 4.5W (headless, no monitor).

Power-saving champ, future Proxmox host
(BIOS at defaults, out of the box):
Lenovo M80s SFF, i5-12500, 2 x 32GB RAM, 1 x 512GB NVMe.
W11 idles at 4.5-5W.
I have not had time to install Proxmox on it yet.

I received an N5100 NAS board last week, installed the MB in a Lian Li Q25 case, and it is currently running Unraid 6.12.4,
hosting 2 x 14TB SATA drives and an Emby docker.
It is performing hardware transcodes really well.
Cons:
I am disappointed with the power saving.
powertop --autotune crashes the machine.
I need to learn / tweak the ins and outs of the BIOS.
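One idea I may try: apply the same tunables one device at a time instead of all at once, to find which one causes the crash. A rough sketch (my assumption is that a single bad runtime-PM toggle is the culprit; needs root):

```python
# Enable runtime power management per PCI device, one at a time,
# instead of letting powertop --autotune flip everything at once.
import pathlib, time

for dev in sorted(pathlib.Path("/sys/bus/pci/devices").iterdir()):
    ctl = dev / "power" / "control"
    print("enabling runtime PM for", dev.name, flush=True)
    ctl.write_text("auto")   # the same knob powertop's tunables write
    time.sleep(5)            # if the box dies here, the last device printed is the culprit
```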


KHD N5100 NAS board, Lian Li PC-Q08 / Q25 case, 2 x 8GB RAM, 2 fans:
Unraid 6.12.4 idles at 21W.
With 1 x 256GB NVMe SSD added and both 14TB drives spun up: 33W.
With the 2 HDDs spun down: 25W.
Emby hardware transcoding from 1 HDD: 38W.
 

SnJ9MX

I have found that machines from established manufacturers are, for whatever reason, more power-efficient than those from non-established manufacturers. I have a Dell Optiplex 3070 SFF with an i5-9500 (not the T variant), 1 x 8GB 2666, 128GB NVMe that idles at 9W; a Dell Precision 3240 Compact (i7-10700, 2 x 16GB, 512GB NVMe) that idles at 6W; and even a Dell R630 (1 x Xeon 2697v4, 8 x 16GB, H330, and a standard SSD) that idled at 53W. These are all great numbers. Dell makes low-power stuff; it's not clear how, but many people are surprised by single-digit idle power from highly capable machines.

The Optiplex/Precision idle numbers aren't much greater than a Raspberry Pi's: 3W vs 8W. Even at EU electricity prices, that is squarely in the "low enough" category.
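For scale: that 5W delta running 24/7 works out to about 44 kWh a year, so under $20/year even at the ~$0.43/kWh rate mentioned above.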
 

mikegrok

I know that ZFS hates USB. ZFS performs copy-on-write, while USB performs blind writes and will often get ahead of itself; it relies on the OS to be extra diligent. So don't expect performance from a NAS with disks on USB. As of 2017, the OS has to do reads after it writes to verify that the write did not get discarded before it was written to media.

While ZFS can export and import arrays, each time the ID of a drive gets scrambled it has to do an extended check to verify that it is the same device.

As of 2017, on each reboot or sleep, or sometimes for no reason at all, the USB IDs get shuffled. This plays havoc with your NAS and causes disconcerting delays, seemingly without explanation.
 

gea

If you want to hot-add, move and remove disks without problems, the gold standard is an LSI 12G SAS HBA. With a unique disk ID like a WWN you can move disks around and they retain their ID. USB is acceptable for a single removable backup disk, but not for regular pool disks in a raid, where it is a pain.
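On Linux you can see these stable IDs under /dev/disk/by-id; a small sketch:

```python
# List persistent disk identifiers (wwn-..., ata-MODEL_SERIAL, ...).
# These survive reboots and controller moves - reference pool disks by them.
import os

BYID = "/dev/disk/by-id"
for link in sorted(os.listdir(BYID)):
    target = os.path.realpath(os.path.join(BYID, link))
    print(f"{link:55s} -> {target}")
```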
 

tinker

I'll reserve the USB external disks for backups and mass migrations.

The Lenovo TS140 (Unraid)
only has 1 x PCIe 3 x16 slot, 1 x PCIe 2 x1 slot and 1 x PCIe 2 x4 slot.
I could do with a more modern MB,
but I'll use what I have to hand. I did buy a single-lane PCIe SATA card and a single-lane PCIe NVMe card.

I have found my 10GbE card, which is now plugged into the 4-lane PCIe 2 slot
where the PERC card was before. The HDDs are plugged into the onboard SATA ports.
The fast(er) 4-lane NVMe card is plugged into the PCIe 3 slot with a decent 2TB NVMe SSD, which is allocated as a cache.
A 2TB SATA SSD is spare, and may be used as a 2nd cache, if that's possible.

I'm learning how to set up a ZFS array, cache, shares, etc.
 

gea

ZFS uses only the faster RAM as a write cache (about 10% of RAM) and RAM for the read cache (ARC, ca. 70% of RAM).
You can extend the RAM read cache with an SSD/NVMe (L2ARC); L2ARC is persistent. ZFS does not cache whole files,
only small I/O like recent writes/reads and metadata. With enough RAM, L2ARC usage is mostly near zero.

A Slog is not a cache but a protection for the RAM-based write cache.
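You can check what the ARC does on your box before adding anything; a sketch reading the OpenZFS kstats on Linux:

```python
# Print ARC size and overall hit rate from /proc/spl/kstat/zfs/arcstats.
stats = {}
with open("/proc/spl/kstat/zfs/arcstats") as f:
    for line in f.readlines()[2:]:        # skip the two kstat header lines
        name, _kind, value = line.split()
        stats[name] = int(value)

print(f"ARC size: {stats['size'] / 2**30:.1f} GiB")
hits, misses = stats["hits"], stats["misses"]
print(f"hit rate: {100 * hits / (hits + misses):.1f}%")
```

A high hit rate with the ARC well below its limit means an L2ARC would add nothing.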
 

tinker

L2Arc
I've had to look up L2ARC, and it has opened up a real Pandora's box.
It definitely isn't straightforward. I'd never heard of the ZIL until now; it sounds like a write transaction log.

Resources
The server has 32GB of ECC RAM. I have a 2TB NVMe and a slower 2TB SATA SSD,
plus 5 spinning-rust HDDs.

Recommendations?
Any suggestions on the best way to configure it?
Is there an easy-to-understand document that explains this? Someone must've discussed this before.



More info
It's mainly for archival purposes, but good access speed for 5 users would be nice.
I have a mostly 10GbE network.

A mix of files, from small docs to larger video files, say 2-8GB.
It will serve music files, mostly FLACs, maybe transcoded to MP3 or similar.

I'll back up to individual 8-16TB external HDDs. High resilience isn't essential, though I have dedicated one drive to parity.

At the moment each HDD and SSD is formatted as ZFS:
  • One HDD is for parity
  • Four are for data
  • 1 x 2TB NVMe SSD
  • 1 x 2TB SATA SSD
 

alaricljs

Another option: a ZFS special vdev, see "ZFS Metadata Special Device: Z" (mobile not letting me do pretty links)

Basically it's intended as faster storage for the metadata of a slower pool: say you have a pile of hard drives, and all your metadata resides on SSD. It speeds up file operations like searching for a particular file by name.

It's also possible to set up a filesystem so that all files below a certain block size also land on that vdev.
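Setting one up looks roughly like this (a sketch with placeholder pool/device names; note the special vdev should be mirrored, because losing it loses the whole pool):

```python
# Add a mirrored special vdev to an existing pool and route small
# records to it (placeholder names - adapt before running).
import subprocess

def zcmd(*args):
    subprocess.run(args, check=True)

# All pool metadata now lands on the NVMe mirror
zcmd("zpool", "add", "tank", "special", "mirror", "/dev/nvme0n1", "/dev/nvme1n1")
# Optionally: records of 64K or smaller on this dataset go there too
zcmd("zfs", "set", "special_small_blocks=64K", "tank/media")
```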
 

chinesestunna

That's a nice little case! I'm still looking for something that would take 8 x 3.5" drives in a dense arrangement without costing an arm and a leg.
 

gea

It seems you are using Unraid on top of ZFS.
With ZFS you would normally use ZFS realtime raid instead of Unraid's raid-like "backup on demand to a parity disk".

For your use case the alternative would be two or three pools:
Pool 1: 4 disks in raid-Z1 or Z2 for media data
Pool 2: NVMe mirror for hot/critical data, with regular backup to pool 1
Pool 3: single/basic SSD vdev: an external disaster backup for critical data from both (it can live in a removable USB case)

Main advantages over Unraid:
- better performance
- realtime protection / auto-repair on read or scrub

As an alternative you could use the NVMe mirror as a special vdev in pool 1,
but in your case I would prefer 2 pools, for better robustness against failures and disasters.

Disadvantage vs Unraid:
all disks are always spun up (outside a pool-sleep situation); realtime raid needs more power.

And: you don't need the L2ARC (no advantage with your use case and 32GB RAM).
L2ARC helps a lot in low-RAM situations, or with many small volatile files and many users (like a university mail server).
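As a sketch with placeholder device names, the two main pools would be created roughly like this:

```python
# Create the two pools described above (device names are placeholders).
import subprocess

def zcmd(*args):
    subprocess.run(args, check=True)

# Pool 1: four spinners in raid-Z1 for media (raidz2 for more safety)
zcmd("zpool", "create", "tank", "raidz1",
     "/dev/disk/by-id/wwn-disk1", "/dev/disk/by-id/wwn-disk2",
     "/dev/disk/by-id/wwn-disk3", "/dev/disk/by-id/wwn-disk4")

# Pool 2: mirror of the two 2TB SSDs for hot/critical data
zcmd("zpool", "create", "fast", "mirror",
     "/dev/disk/by-id/nvme-ssd1", "/dev/disk/by-id/ata-ssd2")
```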
 

Bjorn Smith

You could consider looking into one of these:

I have 5 of them running, each with 4 SATA SSDs, 64GB ECC RAM and a ConnectX-3 Pro, plus a Mellanox SX6012 switch; all 5 nodes + the switch consume around 120-130W when idling.

Each node uses around 20W +/- when idle, and that is with 4 SSDs, a Xeon E3-1220 v5/v6 and the NIC. If I dropped the NIC and could live with just the built-in 1Gbps I would be below 20W for sure.

This is server hardware, ECC RAM+IPMI.

So if you can live with sub-30W idle, I think it would be worth looking into this :)
 

tinker

I used to build, and later design, servers, and then moved on to NetCache storage devices; this stuff is more complicated and involved than what I used to do for a living, so it feels like I'm back at school :). I don't understand the various permutations that are available and I'm relying on knowledge that's 20 years old, so your suggestions and explanations are gold for me.

I had actually been running Windows 10 Professional with a bunch of HDDs and an SMB share, which worked, then swapped over to the Asustor, but the unreliability of that hardware has pushed me into re-commissioning the old TS140. So that's where we are now.

From what you have said above, I can skip the Unraid pooling and use ZFS to pool the HDDs?
I was assuming that having Unraid pooling on top of the underlying ZFS format would give me the auto-repair/bit-rot protection?

I'll re-visit the HDD pooling.

Thanks!