Q: Low idle power home server/NAS

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
656
233
43
yeah, but your own list of desirable features includes a need to support multiple 2.5" disks. How does using a RasPi fit in this case? I mean, a Helios4 might make sense, but this is a hobbyist board with a few USB ports (and no native SATA or SAS interface). Are you planning to slap the drives into USB2 enclosures and softRAID between them?
I actually have the perfect board for this, I've been testing one myself as a <$100 5x1GbE router with an Intel i350-T4.

They've got a pre-built case for it but if I were using it as a NAS I'd probably just use a SAS HBA with as many drives as I felt like hanging off of it and make the case myself.

Support for this board is way early, software is still very much alpha and I'm having trouble getting it to recognize my 10GbE or Infiniband cards, but there's a *lot* of potential here for <$100.
 

kapone

Well-Known Member
May 23, 2015
831
413
63
While I applaud the effort to shoot for RPi-type systems that go under 5W idle...that's a bit of a misnomer. Why? The title of the thread says 'NAS'. Even if the board were super super super (add a few more ;)) low power, in the end you're adding HDDs to it. And probably big ones, in terms of capacity.

Once you do that...that super super super (did I mention, add a few more?) low power board is trumped by the power consumed by the HDDs (even in spin down), fans to cool them, a bigger power supply to support the drives etc.

And even then, some of the systems that have been mentioned here (like the thin clients, but there are others) are cheaper than or almost the same price as an ARM-based board (talking eBay prices of course) with 10x the compute power, and they don't take a whole lot more power.

In a different thread somebody mentioned that an HP 8200/8300 with an i7 (that's with 4c/8T) idled at like 11 or 12w.

Is it really worth going for the absolute lowest power board, ARM or otherwise, if you have to give up a LOT of compute capacity, with not a whole lot more power consumption? If you can't afford (I'm being facetious, but seriously...) another 10 or 15 watts...there's something wrong with the bigger picture.
 

WANg

Well-Known Member
Jun 10, 2018
894
512
93
While I applaud the effort to shoot for RPi-type systems that go under 5W idle...that's a bit of a misnomer. Why? The title of the thread says 'NAS'. Even if the board were super super super (add a few more ;)) low power, in the end you're adding HDDs to it. And probably big ones, in terms of capacity.

Once you do that...that super super super (did I mention, add a few more?) low power board is trumped by the power consumed by the HDDs (even in spin down), fans to cool them, a bigger power supply to support the drives etc.

And even then, some of the systems that have been mentioned here (like the thin clients, but there are others) are cheaper than or almost the same price as an ARM-based board (talking eBay prices of course) with 10x the compute power, and they don't take a whole lot more power.

In a different thread somebody mentioned that an HP 8200/8300 with an i7 (that's with 4c/8T) idled at like 11 or 12w.

Is it really worth going for the absolute lowest power board, ARM or otherwise, if you have to give up a LOT of compute capacity, with not a whole lot more power consumption? If you can't afford (I'm being facetious, but seriously...) another 10 or 15 watts...there's something wrong with the bigger picture.
Yeah, and that's the reason why I thought the entire thread went a bit...sideways. When we're talking about NAS, I didn't think it meant something that can be done with, say, the equivalent of a Sheevaplug or Pogoplug. If the whole point was to serve up files in an acceptable fashion while penny-pinching on the electric bill, you can't do any better than the $15 ZSun with its 500mA draw off a bog-standard USB port, running OpenWRT off a 64GB MicroSDXC card and serving via Samba. Hell, you can run 3 of them off a wireless network - one as a load balancer and the other 2 as redundancy/high availability - just set them up to rsync each other on every file change, and three of those things will only use a combined 7.5 watts flat-out. Amuse friends, frighten enemies, hide your network storage devices where no one can find them. Oh yeah, and you can totally hang a USB external SSD off a modern DD-WRT capable router and it'll do just fine serving files.

Seriously, I thought there were some minimal expectations on the original post as to features like running the storage off a RAID array or something like that.
 
Last edited:

Evan

Well-Known Member
Jan 6, 2016
3,149
530
113
Synology DS218+, idles at less than 10 watts with a couple of 8TB+ hard disks installed. If you just want bulk storage at low power it’s hard to beat!
 

WANg

Well-Known Member
Jun 10, 2018
894
512
93
Synology DS218+, idles at less than 10 watts with a couple of 8TB+ hard disks installed. If you just want bulk storage at low power it’s hard to beat!
DS218+, as in...2 drives? Well, it better be a RAID1 setup then.
 

Evan

Well-Known Member
Jan 6, 2016
3,149
530
113
DS218+, as in...2 drives? Well, it better be a RAID1 setup then.
Sure, yes, but the DS918+ is also less than 15W idle. Still, once you're at 15W rather than just 6W or 7W, you may as well run a server board and get heaps more functionality.
 

WANg

Well-Known Member
Jun 10, 2018
894
512
93
Sure, yes, but the DS918+ is also less than 15W idle. Still, once you're at 15W rather than just 6W or 7W, you may as well run a server board and get heaps more functionality.
Oh yeah, I totally and respectfully agree with that. A purpose-built server board/chassis with the right port types (the Synology or even something from iXSystems) will work much better since it's specifically designed to do what it is meant to do (serve files, manage RAID parity calcs). There's power sipping, and then there's having the firepower to do things when needed.
 
  • Like
Reactions: AlphaG

Evan

Well-Known Member
Jan 6, 2016
3,149
530
113
Oh yeah, I totally and respectfully agree with that. A purpose-built server board/chassis with the right port types (the Synology or even something from iXSystems) will work much better since it's specifically designed to do what it is meant to do (serve files, manage RAID parity calcs). There's power sipping, and then there's having the firepower to do things when needed.
Look at the Synology specs - the 2-bay is power-sipping for what it is, the 4-bay not so much; from a power point of view you may as well make your own, though the form factor is nice.
J3455 @ 13W idle is about normal, no magic happening there.

Anyway, just saying that if NAS is the main objective, there are some good low-power off-the-shelf options.

C3955 (16-core, 64-128GB RAM) can idle at 25W and run a good amount of other workload.
 

BlackHole

New Member
Jul 21, 2018
17
8
3
While I applaud the effort to shoot for RPi-type systems that go under 5W idle...that's a bit of a misnomer. Why? The title of the thread says 'NAS'.
There seems to be some confusion going on here.
The suggestion was made to have a two-tier setup. The RasPi is supposed to run the stuff that really needs to run 24/7 - and fire up a proper system on-demand, on-time or whatever I can automate.
Details: see above.
slap the OS on the SSD ...putting /var/log in RAM and flushing to disk periodically (Google "log2ram") ...I'm an experienced user though, I've been working with these ARM boards for 3-4 years now, and I definitely think starting with an RPi is a good idea for someone new to the ecosystem.
Thanks for sharing your experience. I'll look into all these things once I have a system and have had a first go at it. This is going to be my 2nd experience with Linux on ARM, and my 1st was god-awful, so I am quite wary. If "write boot medium, insert, fire up, install and configure" fails, the project's dead and the RasPi will be sold the next day. Sorry, but my patience with the ARM eco-nonsystem in general, and Linux on ARM especially, ran out 7 years ago. /rant
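For context on the log2ram tip quoted above: the idea boils down to mounting /var/log as tmpfs and periodically flushing it to real storage. A hand-rolled sketch follows (the actual log2ram package handles this more carefully; the size and the /var/log.hdd path are illustrative):

```shell
# /etc/fstab entry - keep /var/log in RAM (40M size is illustrative):
#   tmpfs  /var/log  tmpfs  defaults,noatime,size=40m  0  0

# Periodic flush to disk, e.g. from a daily cron job or systemd timer:
#   rsync -a /var/log/ /var/log.hdd/
```

The trade-off: you save constant SD-card write wear, but a power failure can cost you up to one flush interval of logs.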
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
656
233
43
Thanks for sharing your experience. I'll look into all these things once I have a system and have had a first go at it. This is going to be my 2nd experience with Linux on ARM, and my 1st was god-awful, so I am quite wary. If "write boot medium, insert, fire up, install and configure" fails, the project's dead and the RasPi will be sold the next day. Sorry, but my patience with the ARM eco-nonsystem in general, and Linux on ARM especially, ran out 7 years ago. /rant
Don't get me wrong, I'm not trying to push you to use a system you're not interested in -- I saw that you'd already committed and thought I'd throw a few bits of hard-earned experience out there. The reliability and ease of use of a lot of these systems has really improved over the last few years, I hope you'll be pleasantly surprised.

Good luck with the board when it arrives, I'm happy to answer any questions if things aren't working as expected.
 

ravindrad

New Member
Aug 30, 2020
1
1
1
This is a very informative thread and something I have also spent some of my analysis time on. I am running my NAS and media services on a 5th-gen i5 NUC with Ubuntu Server. Before that I used an HP Microserver Gen7 with an ultra-low-voltage AMD processor for many years, running FreeNAS 8.x, and it served me well while I was not yet a power Linux user.

Around 4 years back I switched to a Linux desktop as my primary computer and hence wanted my server to be more flexible, so I moved to Ubuntu Server. The NUC is appealing not just for the small form factor: CPU power consumption is around 7-8 watts at idle and goes up to 17-18W max. I have SSD/M.2 and USB hard drives, all shared over NFS, plus a minidlna media server. The main reason for me to use the NUC was the low power consumption rating, and I am very happy with the performance; it is generally very quiet.

I did try a Raspberry Pi 3+ for NFS and minidlna but it does suffer with big media files. So far very happy with the NUC. I am looking forward to trying the ASUS PN50 mini computer with the latest AMD Ryzen 4000 series processor, as upgrading RAM on older PCs is not easy.

I am looking at adding a server-grade machine running Linux, as I'm looking for better compute for some ML tasks. So far my NUC5 with 32GB RAM and my Lenovo M700 Tiny with 16GB RAM are doing the job, both with 5th-gen Intel Core i5 processors. I have the same constraint as before: idle power consumption should be as low as possible.
 
  • Like
Reactions: Tha_14

EngChiSTH

Member
Jun 27, 2018
50
18
8
Chicago
DS218+, as in...2 drives? Well, it better be a RAID1 setup then.
Respectfully disagree - absolutely do not RAID them (RAID only protects from physical failure of the drive) in this configuration.

Instead, create two separate volumes and keep two copies of the data (with a sync job in between if needed). Same storage usage as RAID 1 with fewer single points of failure - drive fails (no problem!), volume corruption (no problem!), deleted something accidentally (no problem!). Good luck doing the same for volume corruption using RAID 1, as you only have a single volume. And perfectly deleted data on a redundant array is still deleted data.

Don't overestimate the value of RAID. Don't assume its usefulness. And of course, 'RAID is not a backup' - have backups.
 

WANg

Well-Known Member
Jun 10, 2018
894
512
93
Respectfully disagree - absolutely do not RAID them (RAID only protects from physical failure of the drive) in this configuration.

Instead, create two separate volumes and keep two copies of the data (with a sync job in between if needed). Same storage usage as RAID 1 with fewer single points of failure - drive fails (no problem!), volume corruption (no problem!), deleted something accidentally (no problem!). Good luck doing the same for volume corruption using RAID 1, as you only have a single volume. And perfectly deleted data on a redundant array is still deleted data.

Don't overestimate the value of RAID. Don't assume its usefulness. And of course, 'RAID is not a backup' - have backups.
First of all, read the entire thread. I actually made a snarky remark about using 3 WiFi USB plugs with mesh rsync to keep the data store consistent while consuming minimal power.

Second of all, RAID is there to increase media resilience and maybe increase I/O throughput by taking advantage of parallelism (want to double disk read throughput? RAID 1, etc.). It's not there as a backup. That being said, there's nothing wrong with backing things up to disk (especially if the disk acts as "warm storage" for rarely retrieved items that have to be accessible with minimal delay - say, music or video files) - just don't forget that whatever disk you buy has an MTBF rating, and if the data is that important and can be stored "cold", well, that's why LTO tapes and tape vaults exist.

Third - even with the sync-between-disks approach, you are still doing the same thing RAID does at the bit level, except now you need some kind of alert/subscribe mechanism between volumes to track and replay filesystem changes (at the very least, something like inotifywait/rsync on the Linux side), and whatever granularity you have is still at the file level. That requires CPU cycles, and unlike RAID you can't just buy a RAID card to automate the task away.

Also, the separate-volume/sync approach does not always guard against what you are trying to defend against. How does it fail?

- Drive failure
If you base those 2 volumes on 2 disks from the same manufacturer/production batch and one dies, the second one might have a higher chance of dying due to manufacturing defects in that batch. (Note: this also impacts RAID - buy disks from the same batch and it'll hit you just as hard. The point is that whether you RAID or not, not paying attention to where you source drives will bite you in the ass regardless.)

- Volume corruption
So if one drive has volume corruption, the best case is that the original drive is fine and the other is corrupt, in which case you have one good copy. You'll still potentially lose incoming data in flight, since it can be written either to the good drive (which then fails on sync) or to the corrupt one (which fails on initial capture and might not be recoverable). Your worst case? Both drives have volume corruption, but in different ways. Then you lose the incoming data and will likely lose the original too.

- Guard against deleted data
That's not a feature of RAID, and the sync does not protect you from deletions or overwrites either. That's more of a filesystem feature, like shadow copies on NTFS or snapshots on LVM/ZFS/APFS. If your filesystem does not support it (straight-up ext4 or HFS+), you won't be able to guard against it.
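For illustration, the snapshot route on ZFS looks roughly like this (the pool/dataset/file names are made up for the example):

```shell
# Take a read-only point-in-time snapshot of a dataset:
#   zfs snapshot tank/media@before-cleanup

# List existing snapshots:
#   zfs list -t snapshot

# Recover a single accidentally deleted file from the hidden snapshot dir:
#   cp /tank/media/.zfs/snapshot/before-cleanup/song.flac /tank/media/

# Or roll the whole dataset back (discards all changes since the snapshot):
#   zfs rollback tank/media@before-cleanup
```

This works the same whether the zpool sits on a single disk or on a mirror - which is the point: deletion protection is a filesystem feature, orthogonal to RAID.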
 

EngChiSTH

Member
Jun 27, 2018
50
18
8
Chicago
WANg, my preference is to keep this respectful and civil. Not sure why you are so bothered by any opinion different than yours.

If you want to prove to me that having single points of failure (i.e. a single volume subject to corruption vs two volumes) is good, that is unlikely to happen. If you want to believe that, sure - you are entitled to your opinion and I respect it. You are choosing different tools to address the same need (data resiliency), great! It does not mean they are the only tools, or that they only work in the combination you use (i.e. snapshots work just fine on various RAID levels or without, etc.). The attitude that 'X is required' (be it RAID 1 for a 2-bay NAS or RAID 5 for a 3+ bay NAS) and that you can only run X in Y way is arrogant and in my opinion misguided.

I also find it funny that you are trying to somehow 'prove' to me that having a single volume is bad due to the concern of potentially losing incoming future data. Compare that, please, with GUARANTEED loss of ALL of the data if you deleted it without backup from a RAID 1 volume (the 'but I had it redundant, per this guy on the forum' excuse), or losing access to ALL of the data on a single volume if that is all you had (forget redundancy underneath).

You may see the difference between "assured loss of access to all of the data" vs "potentially impacting future incoming data in flight", or you may not. Who cares about the second if you already suffered the first? Get real. Same for the disks from the same production batches - get real. That would be the last of your concerns if you suffered real data loss (vs scrambling to find some place, somewhere, anywhere this data exists so you can restore).

2 is always greater than 1. Math does not care. It just works... peace.
 

WANg

Well-Known Member
Jun 10, 2018
894
512
93
WANg, my preference is to keep this respectful and civil. Not sure why you are so bothered by any opinion different than yours.

If you want to prove to me that having single points of failure (i.e. a single volume subject to corruption vs two volumes) is good, that is unlikely to happen. If you want to believe that, sure - you are entitled to your opinion and I respect it. You are choosing different tools to address the same need (data resiliency), great! It does not mean they are the only tools, or that they only work in the combination you use (i.e. snapshots work just fine on various RAID levels or without, etc.). The attitude that 'X is required' (be it RAID 1 for a 2-bay NAS or RAID 5 for a 3+ bay NAS) and that you can only run X in Y way is arrogant and in my opinion misguided.

I also find it funny that you are trying to somehow 'prove' to me that having a single volume is bad due to the concern of potentially losing incoming future data. Compare that, please, with GUARANTEED loss of ALL of the data if you deleted it without backup from a RAID 1 volume (the 'but I had it redundant, per this guy on the forum' excuse), or losing access to ALL of the data on a single volume if that is all you had (forget redundancy underneath).

You may see the difference between "assured loss of access to all of the data" vs "potentially impacting future incoming data in flight", or you may not. Who cares about the second if you already suffered the first? Get real. Same for the disks from the same production batches - get real. That would be the last of your concerns if you suffered real data loss (vs scrambling to find some place, somewhere, anywhere this data exists so you can restore).

2 is always greater than 1. Math does not care. It just works... peace.
First of all, since when am I "not civil or respectful" here? Do you understand the American practice of civil debate? People on opposing sides present facts and evidence and argue their points...which is normal. Just because someone disagrees with you doesn't make it uncivil or disrespectful. In fact, the tone of your language makes you sound uncivil and disrespectful (what the heck is the "per this guy on the forum" excuse?)

Second, my argument isn't about how bad a single volume or single drive is - what I am merely saying is that the basic premise of your method (2 separate volumes, synchronized) does not address physical drive loss, data corruption or deletions either.

Let's have a theoretical scenario. I have a pair of SD cards formatted to FAT32, one on slot A, one on slot B. I have a process that can write to A or B (not both at the same time). There is also another process that detects changes to A and updates B, and detects changes to B and updates A.

a) Card on slot A failing does not automatically mean the card on slot B cannot fail (pretty obvious)

b) Corruption on either card can result in several things:

- New data being written can fall on the good card, in which case the data captured would be good.
- New data being written can fall on the bad card, in which case the data captured would be bad.
- During the update stage, the good card tries to update the bad card and fails (bad volume), in which case the whole thing turns into something like a degraded 2-disk RAID 1 array.
- During the update stage, the bad card tries to update the good card and fails (either the update is never triggered, since the write to the bad card never succeeded, or the bad card flags it as good and tries to write back nonsensical data, which might or might not be accepted by the good card). Either the original data on the good card stays put, or it gets corrupted by gibberish.

c) If I overwrite the files on one card, I overwrite the files on the other, and if I do it in a way where the inodes are overwritten, the data cannot be recovered on the second card either.

For drive resiliency, that's just buying good drives (pay more for more resilient drives) and, if you are being pedantic, buying them from different batches so they won't share manufacturing defects. For guarding against volume corruption, that's the job of the filesystem sitting on top of the bit bucket to ensure integrity, and that has nothing to do with whether it is a standalone volume on a single disk or RAID. For overwrite/deletion protection you would still need a filesystem feature like snapshots or shadow copies, which can be supported on standalone filesystems or in RAID setups (like zpools with snapshots). In both cases there is no deletion protection unless one is explicitly put in.

At the very least, the setup as you described functions similarly to RAID, but instead of being bitwise redundant it's filewise redundant. In terms of efficiency, though, instead of just repeating a write call on a different SATA port the way RAID does, it has to run an alert process and a watcher process and do the write later.

If your argument is that RAID is somehow not protected against deletion, yeah, sure - deleting a RAID array will instantly put you into shit creek, but even then there are ways to recover if you are fast about it (Christophe Grenier's testdisk saved me at least once in my career - so no, deleting a RAID 1 array is not guaranteed data loss). Once again, the same line of reasoning also exists for single-volume setups.

And no, you don't need to use RAID on a multi-spindle setup - that's why Unraid also exists. However, it is just as presumptuous for you to assume that RAID is overrated and has no useful purpose. Like I've said before - RAID is there to increase media resilience and to change disk I/O characteristics (better throughput, etc.). And in the case of a Synology DS2, it's just a cheap, easy and well-understood way to add media resilience to a small, not-that-critical NAS.
 
Last edited:

EngChiSTH

Member
Jun 27, 2018
50
18
8
Chicago
And generally, think of the use case of the most likely person buying a 2-bay off-the-shelf NAS - not technical, wants it to 'just work', expects the wizard to take care of everything, just plug in and go. Not going to have anything else backing up the NAS (the 'I bought the NAS to _be_ the backup' attitude). Performance is not noticeable to them at all and irrelevant in a basic 2-bay system.

This person is going to plug in their Synology (or QNAP or whatever), plug in the drives, open the 'quick start' guide, and click 6 times expecting they are done. They are going to read that 'RAID 1' is suggested for their two-drive configuration, accept that and move on, and believe they did a great job doing this 'very technical thing'. They may even pat themselves on the back and brag to their spouse that they 'took care of it'.

All will work until something real happens, e.g. the same person wiping out a folder by accident. They will still have their good array with two good drives and no data. They will be mad, call support, blame the manufacturer, and then come to forums, be it AnandTech, SmallNetBuilder, or even this site (but unlikely, as this is more technical). Then such a person will hear that dreaded question 'didn't you have a backup?', or hear that RAID 1 was actually a dumb way of doing what they did. Hopefully the data they wiped existed somewhere else (on DVD or some USB HDD) and could be recovered.

This is the case for the vast majority of users in the basic 2-bay NAS market.

Completely different from people running virtualization clusters, domains, plenty of services, etc. Such people are probably not going to even consider a 2-bay NAS for anything, are much more in tune with various system bottlenecks (be it the chipset/CPU itself, LAN, SATA, RAID combinations) and have completely different needs a 2-bay is highly unlikely to ever answer. We are not talking about those people here, as such people already know all of this.
 

WANg

Well-Known Member
Jun 10, 2018
894
512
93
And generally, think of the use case of the most likely person buying a 2-bay off-the-shelf NAS - not technical, wants it to 'just work', expects the wizard to take care of everything, just plug in and go. Not going to have anything else backing up the NAS (the 'I bought the NAS to _be_ the backup' attitude). Performance is not noticeable to them at all and irrelevant in a basic 2-bay system.

This person is going to plug in their Synology (or QNAP or whatever), plug in the drives, open the 'quick start' guide, and click 6 times expecting they are done. They are going to read that 'RAID 1' is suggested for their two-drive configuration, accept that and move on, and believe they did a great job doing this 'very technical thing'. They may even pat themselves on the back and brag to their spouse that they 'took care of it'.

All will work until something real happens, e.g. the same person wiping out a folder by accident. They will still have their good array with two good drives and no data. They will be mad, call support, blame the manufacturer, and then come to forums, be it AnandTech, SmallNetBuilder, or even this site (but unlikely, as this is more technical). Then such a person will hear that dreaded question 'didn't you have a backup?', or hear that RAID 1 was actually a dumb way of doing what they did. Hopefully the data they wiped existed somewhere else (on DVD or some USB HDD) and could be recovered.

This is the case for the vast majority of users in the basic 2-bay NAS market.

Completely different from people running virtualization clusters, domains, plenty of services, etc. Such people are probably not going to even consider a 2-bay NAS for anything, are much more in tune with various system bottlenecks (be it the chipset/CPU itself, LAN, SATA, RAID combinations) and have completely different needs a 2-bay is highly unlikely to ever answer. We are not talking about those people here, as such people already know all of this.
Once again, you are conflating the solution for one problem with another. RAID 1 isn't dumb - its job is not (and was never meant) to guard against overwrites. It's there to boost media resilience: if you take your data and back it up to a single thumb drive (or USB hard drive - which people do all the time), you are relying on a single drive. When you do it on a 2-bay RAID 1 array, all you are doing is spreading that risk over 2 drives, and it's an easy thing for non-techies to understand. Turning on RAID 1 and assuming that it guards against accidental overwrites and deletions is like assuming that the spare tire in your car guards against the car slipping in the rain - and if the end user assumes that, well...that's just an end user requiring education.

And if you do overwrite or delete something on a NAS, the first question someone would ask is more likely "did you enable snapshots, shadow copies or the recycle bin feature on your NAS?". The next question would be "did you try testdisk or the myriad of recovery tools out there first?"
 
Last edited:
  • Like
Reactions: Tha_14