Q: Low idle power home server/NAS

Discussion in 'DIY Server and Workstation Builds' started by BlackHole, Jul 28, 2018.

  1. BlackHole

    BlackHole New Member

    Joined:
    Jul 21, 2018
    Messages:
    12
    Likes Received:
    8
    Caution: Wall of text incoming

    Hi everyone,

    my current HP Microserver Gen7 is overdue for retirement. It served me well as a home server/NAS all in one box.
    Looking forward, I'd like something more powerful to separate things a bit better - VMs or containers, I haven't
    decided yet. The Gen10 seems to be lacking in that regard, so I am looking for something else (like a Gen10.5 with
    Ryzen embedded or Epyc 3000 - HPE, pretty please with puppy eyes)

    Currently running CentOS: NAS with snapraid on 3x 3.5" HDD, SMB on 2x 2.5" SSD, Owncloud, irc and other stuff.
    The machine sits next to my cable modem, so it'll be headless and with strict size limits.

    So I came up with the following requirements:
    * low idle power, decent compute power CPU; power efficiency when busy not as important [high power cost, cooling, noise]
    * high efficiency PSU; maybe external DC [low idle power]
    * min. 4x 3.5" (hot)swap bays [bulk storage]
    * min. 2x 2.5" - be it bays or just some volume to strap an SSD into [data storage, cache]
    * SATA connectors for all that on the mainboard [low idle power]
    * min. 1x NVMe M.2 - on board or via AOC [OS/VMs/containers]
    * ECC memory - I doubt I'll exceed 32GB
    * x86 with AES-NI [ARM SOCs are too much hassle in terms of SW/long term driver support]
    * preferably no IPMI, either iGPU/APU or completely without GFX [idle power consumption; iGPU: transcoding]
    * 1G; must be able to disable any 10G [home network won't see 10G the next 5y; idle power consumption]
    * reasonably priced [price is not of utmost importance but I hate wasting money]
    * available in the EU
    * Size: WHD less than 25x45x40 cm - otherwise The Treasury will not approve funding
    * build this year for tax reasons :D

    Starting here, I checked the market for available options:

    A - Case options
    The limited space available limits the case options, which in turn limit the mainboard options.

    UNas NCS 810(A) - unavailable in the EU (permanently out of stock at kustompcs.co.uk)
    Silverstone CS380 - too big
    Silverstone DS380 - thermals, ITX only
    Silverstone CS381 - WHD 24x40x32 but not released yet
    Supermicro SC721 - ITX only
    Ablecom CS-T50 - ITX only, thermals look weak, unavailable in the EU
    InWin MS04 - ITX only, SC721 seems better
    InWin MS08 - too big
    Fractal Design node - no hotswap
    Norco ITX S4/S8 - ITX only, unavailable in the EU, 80mm fans
    many other - cheap design, bad thermals, unavailable in the EU - or all three

    My current favorite: Silverstone CS381 since µATX gives more options for the mainboard and it has more hotswap bays
    than required; still unavailable, but Silverstone has decent distribution in the EU, so I am hopeful.
    If ITX, I'd go with the excellent SC721 (and maybe swap the PSU).

    B - CPU uArch options
    The new system will face a 5+ year lifetime, so I'd rather get some current tech.

    Atom C3000
    + RDIMM
    + Storage, network requirements easily met
    - with all the delays not really current tech
    - basically all mainboards feature IPMI

    Xeon-D 1500
    + RDIMM
    + Storage, network requirements easily met
    - not really current tech anymore
    - basically all mainboards feature IPMI

    Xeon-D 2100
    - Insane idle power consumption - I don't need to look any further: just no

    Xeon E3-1200 v6
    + wide choice of boards/CPU
    + no IPMI, iGPU option
    + decent idle power
    - not really current tech
    - UDIMM

    Xeon E-2100
    + Current tech
    + no IPMI, iGPU option
    + more compute power than E3-1200 v6 possible
    - UDIMM
    - not available yet

    Ryzen 2000
    + Current tech
    + Available
    + no IPMI, APU option
    + 2x00G + 4xx chipset gives decent idle power
    - not server grade
    - UDIMM
    - Getting ECC support is hard
    - Getting enough storage is hard

    Ryzen V1000
    - practically unavailable (Sapphire & Udoo solution exist but lack... everything)

    Epyc 3000
    - unavailable

    C - Putting it all together
    From my research above I've come up with the following basic builds that need appropriate memory,
    PSU, and discs.

    Xeon E3-1200 v6
    * SC721 or CS381
    * AsrockRack C236 WSI or Fujitsu D3417-B2
    * Xeon E3-1245 v6
    + can buy today (for WSI)
    + iGPU no IPMI
    + 8 SATA (WSI)
    + good price/performance
    + good performance
    + discounts as Xeon-E is phased in
    - no M.2 on board -> via AOC (for WSI)
    - wait for case (for CS381)

    Xeon E-2100
    Like the Xeon E3-1200 v6 build but next gen
    * Fujitsu D3644-B
    * Xeon E-2146G
    + iGPU no IPMI
    + really good performance
    + CPU upgrade options
    o 6 SATA
    o price/performance OK
    - wait for case (CS381), board(s)

    Atom C3000
    * SC721
    * AsrockRack C3758D4I-4L
    + can buy today
    + 12V-DC only operation
    + 13 SATA galore
    o AST2500 (faster and/or less idle power vs the AST2400. Patrick, got data? :D )
    o good enough performance
    - price/performance meh

    Xeon-D 1500
    * SC721
    * Supermicro X10SDV-8C-TLN4F or X10SDV-8C+-LN2F if available
    + can buy today
    + excellent performance
    o 6 SATA
    o 10G, but can be disabled (on -TLN)
    - price/performance meh
    - AST2400

    Ryzen 2000
    * CS381
    * Asus Prime B450M-AOC (expected to support ECC)
    * Ryzen 2400G
    + MAGA :)
    + APU no IPMI
    + Mobile derived die
    + Excellent price/performance
    + CPU upgrade options
    o Good enough performance
    o 6 SATA, disables M.2 though
    - M.2 via AOC
    - wait for suitable mainboard - may not happen

    So, yeah, I did my homework. But to no avail: lots of research and datasheet reading, yet no conclusive solution. My current favorite is the Xeon E-2100 variant - but if the gear doesn't become available in time, I need a readily available alternative to buy in November.

    I'd be thankful if anyone has comments or suggestions. Was some use-case ignored? Did I miss some interesting cases? Is there some technical option I overlooked, is there some concrete hardware that would be more suitable? General suggestions how to make that choice?

    If you had the patience to read all this: Thanks for your time!
     
    #1
    Tha_14 likes this.
  2. billc.cn

    billc.cn Member

    Joined:
    Oct 6, 2017
    Messages:
    36
    Likes Received:
    4
    In a similar boat recently. The E3-1200L series is the only viable option if low power is the most important factor. Xeon D/Atom C3000 are more SoC-focused than low-power.

    The Coffee Lake E-2100 is also not going to be a lot better than the Kaby Lake E3 v6s; quoting Wikipedia:

    I wonder why you're against UDIMMs though? They are not that difficult to find, and prices are not a lot higher than for RDIMMs.

    Similar question for IPMI. Most modern BMCs use 5W or less, and they are quite handy when you need them.

    Also what is AOC? Some kind of M.2 adapter?
     
    #2
    BlackHole likes this.
  3. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    672
    Likes Received:
    233
    E3-1200L's will idle at pretty much the exact same power usage as the equivalent E3-1200's. It's only the max TDP that's limited on the L versions (usually by a cap on clocks/turbos) - and most home servers will hopefully spend >95% of their time idling CPU-wise.

    I'm not entirely sure what this is supposed to mean. What's to stop something being a low-powered SoC? From my measurements at any rate the C3000 setups use less power idling than the nearest equivalent low-end E3 chips and boards (although finding direct comparisons is difficult).
     
    #3
    billc.cn likes this.
  4. billc.cn

    billc.cn Member

    Joined:
    Oct 6, 2017
    Messages:
    36
    Likes Received:
    4
    I meant that they are designed to be feature-rich SoCs first, so the power side may suffer. Xeon D is very power-efficient, but there's no low-TDP part; C3000 is the polar opposite, with low TDP but rather poor performance per watt.

    I have no doubt the latter is a good fit for single-purpose/appliance kind of use cases, but to run a few VMs with usable performance, the higher-TDP SKUs have to be used, which kind of defeats the purpose. The only thing I really liked about it is the built-in 10G networking, but that, unfortunately, is the one thing the OP doesn't need. Considering prices and availability, the older E3s are a lot better.

    Admittedly, all my research is based on TDP, because it is tough to find consistently measured, real-world power usage numbers for every chip.
     
    #4
  5. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,425
    Likes Received:
    279
    From my experience, trying to find low idle power CPUs is all but pointless. The power savings on most modern CPUs are at the max-TDP end of the range, not the idle end. Also, depending on your workload, a higher-TDP CPU may actually save you power if it can finish the job and return to idle much faster than a slower, more power-efficient CPU.
     
    #5
    K D likes this.
  6. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    672
    Likes Received:
    233
    TDP is, not to put too fine a point on it, bollocks. The only real metric of how much power is used on a given task is measuring how much power is used on that task, and the manufacturer's TDP for a CPU frequently has precious little to do with it. Xeon-Ds will typically idle lower than the E3/E5 equivalents, C3000s lower still. Most of the "extra" components in the SoC won't chew any appreciable power unless they're actually being used. And how much a given workload taxes the CPU or the other subsystems is almost entirely workload-specific.

    You have to know what your workload is and pick a platform whose strengths play to your workload. For instance, basing my last NAS on the A2SDi-8C-HLN4F not only got me about 5W lower idle draw than the celeron system that preceded it, but because the SoC also has twelve SATA ports I was able to completely ditch the ~5-10W 8-port HBA as well, and none of the tasks this computer does will typically be limited by the inferior IPC of the goldmont architecture (in fact, the only task this machine does that's CPU-limited is the single-threaded rsync delta comparison of a number of different disc images as done by a backup job, and that's not time critical).

    Not sure why you think you need to run "high TDP" chips in order to run VMs. The aforementioned A2SDi-8C-HLN4F is running two VMs under KVM very happily and it's got plenty of headroom for more.
     
    #6
    CookiesLikeWhoa likes this.
  7. BlackHole

    BlackHole New Member

    Joined:
    Jul 21, 2018
    Messages:
    12
    Likes Received:
    8
    Thanks for all the answers!
    Oooh, I wasn't aware of that. Very useful information.
    Did some googling and found some data at Intel Core i7 8700K / i5 8600K / i5 8400 'Coffee Lake' review: affordable six cores!
    From the link: 32W to 40W (full system incl. 1080Ti) - Kaby Lake is approx. 8W lower at idle -> Kaby Lake it is.
    I'm just not too fond of ECC UDIMMs. The market for ECC RDIMMs is just so much bigger, with more choice and supply - that's all.
    I'm trying to get into 20W territory - entire system from the wall, idle with HDDs spun down. 5W is quite a significant number at that point.
    The utility of a BMC is undisputed - but I've run my Microserver Gen7 many years without, so I'm willing to trade this for the power saving.
    Add-on-card as an M.2 adapter, thinking of Super Micro Computer, Inc. - Products | Accessories | Add-on Cards | AOC-SLG3-2M2
    That was my understanding of the matter as well, thanks for reaffirming that. This 95% idle time figure will apply to my machine as well, that's why that's my focus.
    Generally speaking yes, but in my use case (I guess) no. My machine idles most of the time, so I am less worried about the CPU (they're all pretty decent now) than about the entire platform. Getting 8-year-old data center surplus that idles at 250W would really hurt my wallet.
    It's great for estimating the max. sustained power consumption - thermal design power kind of gives that away. But that's it; there's nothing about average or minimum power consumption to be drawn from it.
    What gen Celeron was the old system based on? Just to get a feeling for the numbers, Atom vs. desktop part.
    'Know your workload' is the idea behind my musings. An Atom board like that would be a no-brainer - if there were any without BMC and enough SATA ports (sorry Tyan). Or did I miss some product from a less prominent brand?
    With BMC, an Atom board idles at the same power as a Xeon-D (4c for sure, 8c approximately) - around 20W, see the STH comparisons. Given the choice, I'd take the beefier Xeon-D - because in my case, 6 SATA is enough.

    Anyway, excellent input. Given what I learned about Coffee vs. Kaby Lake, Kaby is now my favorite.
    (Am I the only one who considers Whiskey Lake the stuff the investors will need with how 10nm is proceeding?)
     
    #7
  8. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    672
    Likes Received:
    233
    Stop trying to put too fine a point on it! ;) In a scenario like a home file server that's idle 95% of the time, the numbers from the TDP don't really give you any indication of what power usage is going to be - as you say, it's "the whole system" power usage you're concerned with, TDP only tells you how many fans and how big a HSF you might need if you decide to run it at 100%.

    The old system used a Haswell Pentium G3220 but, as noted, apples-to-apples is hard because of the motherboard and HBA differences - going with the Atom was an easy choice since it meant I didn't need the HBA. In all cases the power usage of the CPU and motherboard is usually dwarfed by the idling of the HDDs - I don't use spin-down since in my experience that drastically shortens the lifespan of the drives (my £0.02, YMMV, not scientifically tested, not endorsed by Kelloggs etc).

    That's pretty much it, yeah. If you're not concerned about maximum TDP or having a BMC, a bog-standard Xeon would have almost the same idle draw and would likely provide better performance if and when it's needed. The usual NAS workload - long-running tasks that trickle along at ~5-10% CPU - likely won't show much in the way of difference between the two CPUs either.

    I'm not aware of any server-esque Atom boards suitable for NAS usage that don't have IPMI - I think all of the Supermicro A2SDi models have it. There is this model, although with no ECC and supposedly limited to 8GB of RAM. All of their other E3000 boards have only one or two SATA ports.
    A2SAV | Motherboards | Products - Super Micro Computer, Inc.

    <wishing someone at some point would bring out a nice Epyc 3000 board suited for the home server market but looks like I will be waiting a long time>
     
    #8
    BlackHole likes this.
  9. chinesestunna

    chinesestunna Active Member

    Joined:
    Jan 23, 2015
    Messages:
    516
    Likes Received:
    101
    This might be off - less a platform/CPU recommendation than a use-case one. As this is solely a NAS box, does it need to be on 24x7? Especially if idle power use is your concern, I believe putting the system to sleep and waking it on access via WOL etc. would be easier than squeezing every W of savings out of the hardware.
    I went through a similar build attempt/thought exercise and that was ultimately my conclusion.
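    For reference, waking a sleeping box is just a UDP broadcast of a "magic packet": 6 bytes of 0xFF followed by the target NIC's MAC repeated 16 times. A minimal sketch (function names are mine; the MAC shown is a placeholder):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """WOL magic packet: 6x 0xFF, then the 6-byte MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN (UDP port 9 is conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC - use your NAS NIC's address
```

    The NIC and BIOS both need WOL enabled for this to work, of course.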
     
    #9
  10. BlackHole

    BlackHole New Member

    Joined:
    Jul 21, 2018
    Messages:
    12
    Likes Received:
    8
    TY for the info. I am aware of the spindown issue and am running desktop HDDs. I get on average 1 spinup/day, since most data transfer is either some large bulk or some continuous streaming. So I am not too worried.
    I provide other data, say the ownCloud DB and the 'Documents' network storage from the SSD, so most operations do not need the spinning rust.
    This confirms my preference for the Kaby Lake solution. Also, re BMC, the C236/Xeon combination allows for DASH/AMT. Dunno if that's any good.
    The A2SAV is too lean for my taste. I could go with the Gen10 Microserver to get that and would be better on the memory front. And Epyc 3000... *drool*. Lots of oCuLink. HP Microserver Gen10.5 with 4x(U.2 or SATA) hotswap....
    (My guesstimate is that this was supposed to happen, but AMD didn't get the silicon done in time. See how there is no other user of the famous Opteron X3000 chip line - under that brand? I bet AMD is 'selling' them at a loss to retain the customer. Again: just my guess, not facts.)
    I appreciate your thought and agree, if this was only a NAS box this would be the best way to go.
    However, I am running some things that require 24/7 operation. The users of the ownCloud install will not accept having to WOL the machine - tried that, didn't go down well. In terms of usability, you're always compared to Google...
    While this would still allow for 6-8h downtime per night, my IRC stuff needs to be up all the time.
    I considered using a RasPi for the 24/7 stuff, but my gut feeling is that an additional power consumer helps little, while adding the hassle of maintaining two systems on two uarchs with two different distros.
    Or do you think the latter might be worth it?

    Again, very interesting thoughts.
     
    #10
    nthu9280 likes this.
  11. chinesestunna

    chinesestunna Active Member

    Joined:
    Jan 23, 2015
    Messages:
    516
    Likes Received:
    101
    Gotcha, well it all comes down to use case and balancing the factors. I used to favor one large array with everything but I realized some of the data I use so rarely it's more like cold storage. I've since moved them off to a smaller 2nd array that's sleeping save for 2-3 times per month maybe. If performance requirements aren't high and power use is a concern, tiered storage could be an idea? Have a smaller SSD (lower power/heat) running ownCloud stuff and all the media on spinners that's sleeping.
    Back-of-napkin math I did before to help me frame things: power cost is about $0.11 per kWh where I'm at, so each 1W increase at the tap running 24x365 = $0.96 per year; round up to $1 per year. So say you have something that saves 50W (on average) and costs $50 upfront (capex): you'll break even in the first year, and afterwards you're ahead, assuming the same use case.
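    That napkin math generalizes to a two-line calculation (the $0.11/kWh rate and the 50W/$50 figures are just the example numbers from above):

```python
def annual_cost(watts: float, price_per_kwh: float = 0.11) -> float:
    """Yearly cost of a continuous load: W -> kW, times 8760 hours/year."""
    return watts / 1000 * 24 * 365 * price_per_kwh

def break_even_years(upfront: float, watts_saved: float,
                     price_per_kwh: float = 0.11) -> float:
    """Years until an upfront spend is paid back by the power it saves."""
    return upfront / annual_cost(watts_saved, price_per_kwh)

print(round(annual_cost(1), 2))            # ~0.96 $/year per continuous watt
print(round(break_even_years(50, 50), 2))  # ~1.04 years for $50 saving 50W
```

    Swap in your own electricity rate; at EU prices (2-3x higher) every saved watt pays back correspondingly faster.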
     
    #11
  12. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    672
    Likes Received:
    233
    Back-of-a-fag-packet maths tells me that the expense of building and running two servers - one for 24/7 services, one for a few hours a day of file server duty - doesn't work out economically. Although, since it looks like you're providing services to others, keeping an RPi with a warm standby of your ownCloud instance and other essential services would be sensible - and it's likely a fun project to try automating.
     
    #12
    Tha_14 likes this.
  13. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    672
    Likes Received:
    233
    Missed this when I saw your post first - I've been a customer of theirs for over 15 years now and they've always provided exceptional service. I was the person who asked if they were aware of this case's existence and whether they'd consider selling it, offering to happily pay extra for them to import on my behalf - happily he said yes and has continued to sell them since. But the volume is low, so he tends to only import when he has buyers lined up - if you're interested in this case, I'd suggest sending Graeme an email and asking about the status. He won't spam or give you the hard sell or anything like that.

    (Not trying to shill - I haven't included the email address here, but it's right there on the website. As above, I'm just a very satisfied customer.)

    ...and in fact I'm actually going to move my build from my NSC-800 into the InWin MS08 in the coming week (which is why this thread came up in the search). I can afford the extra space now (although noise will be another matter), and I still don't think the case is that big considering the layout. We shall see...
     
    #13
  14. BlackHole

    BlackHole New Member

    Joined:
    Jul 21, 2018
    Messages:
    12
    Likes Received:
    8
    Yeah, in terms of pure economy it's not even worth thinking about if you're in a proper job (even at 3 times the power cost, like I pay). Just use whatever you like/have, do one additional billable hour per year - and come out ahead.
    But, hey, it's a hobby.
    My thoughts exactly, once you guys gave me the idea. Talked to a coworker who has already done some RasPi stuff, and we came to the conclusion that the 3B+ should be up to the task. Ordered one today - even if it's just to tinker with. Will post a pic if I take it live.
    Seeing that there's an official Fedora for the RasPi made the decision much easier. I am a Red Hat/emacs guy and don't do Debian/vi :D
    Talk about good word-of-mouth.
    I'll keep that in mind as plan B. I like the CS381 better in terms of internal layout & the SFX PSU, so I'll wait until about the end of September to see if it materializes. If not, I'll attempt the contact you provided. Kaby Lake discounts should be better by that time, too. I hope.
    Hmm, MS08 513 x 170 x 340 mm = 29.7 L vs. CS381 398(W) x 320(D) x 232(H) mm = 29.5 L - you're actually right, I expected at least a 30% difference.
    But the 513mm dimension kills it for me anyway.
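    The litre figures above check out; it's just the published dimensions multiplied out:

```python
def litres(w_mm: float, d_mm: float, h_mm: float) -> float:
    """Case volume in litres from mm dimensions (1 L = 1,000,000 mm^3)."""
    return w_mm * d_mm * h_mm / 1_000_000

ms08 = litres(170, 340, 513)   # InWin MS08, dimensions as quoted above
cs381 = litres(398, 320, 232)  # Silverstone CS381
print(round(ms08, 1), round(cs381, 1))  # 29.7 29.5
```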
     
    #14
  15. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    672
    Likes Received:
    233
    I did actually try the DS380 before the NSC-800 as it was considerably easier to get hold of as well as cheaper, but ended up returning it as the cooling and build quality were both sub-par IMHO. Hopefully they've ironed out the kinks.

    The serious limitation of the Pi's has always been IO. Network IO won't be much of a problem for a web server, but file IO on the database might be something of a concern, even on a good SD card - but you'll know better than me what your current IO requirements are like. Incidentally I saw that owncloud supports using sqlite as a database; you're probably not using this as a DB yourself already, but for my home stuff I find it invaluable and use it wherever it's supported (e.g. mediawiki, my pale moon sync server) - piece of piss to set up and back up as it's just a single file, much less admin overhead and scales well enough to cope with a decent number of users. Worth investigating at least I think and certainly behaves better than postgres on an RPi IME.

    Post reported >:^(
     
    #15
    arglebargle and Tha_14 like this.
  16. arglebargle

    arglebargle H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈

    Joined:
    Jul 15, 2018
    Messages:
    299
    Likes Received:
    83
    I've been steadily moving all of my 24/7 services over to low power machines for the last year or so in an effort to reduce my power footprint. If you're interested in using arm devices as servers I can share a few things I found helpful.

    Make sure you're using a high-quality power supply. The two most common issues with ARM SBCs are instability due to flaky power and fs/performance problems due to low-quality SD cards. I usually use 2.5-3A 5.1V supplies for micro-USB boards; this keeps voltage in spec under load and with slightly longer cables.

    @EffrafaxOfWug is right, IO is the Achilles heel of most of these ARM boards. The RPi is a bit of a dog here: all four USB ports and the ethernet PHY are attached to a USB2 hub and connected to the SoC through a single USB2 port. You'll get iperf results around 280Mb/s on the 3B+, but if you start involving USB storage you're splitting that bandwidth between devices.

    You're going to be IO bound waiting on your storage on these boards a lot, especially if you're using an SD card to run the OS. The single best upgrade you can make on most of these arm systems is dumping the SD card and moving the installed system to a cheap SSD. This also ups system reliability, you can't monitor SMART for signs of pre-failure on an SD card but you can on an SSD.

    Model 3 and later RPis support USB boot - you don't need to boot from an SD card and then load the root from USB storage - so the first thing I do on these is burn the one-time fuse to enable it.

    I've had really good luck buying cheap used SSDs for my ARM boxes on eBay; I don't think I've paid more than $15-17 shipped for a 128GB drive in the last six months. Add a $10 UASP enclosure from Amazon and you're still getting a better deal than buying a 64 or 128GB SD card, plus 4K random IO performance will be significantly faster.

    There are a handful of newer ARMv8 boards that you might want to check out too:

    I love these as ghetto little terminal servers and UPS watchdogs:
    Orange Pi Zero (H2+ quad core, 256 MB) development board - AliExpress
    note: the onboard "wifi" is literal garbage, I only use these for wired applications.

    This is my go-to when I want functional wifi, note the 8gb onboard emmc:
    Orange Pi Zero Plus2 (H5 quad-core, WiFi/Bluetooth mini PC) - AliExpress
    These make amazing all-in-one wifi CUPS servers and they're surprisingly good RetroPie machines too.

    This is my current favorite and what I tell everyone who's considering a Pi as a server to buy:
    ROCK64 – PINE64
    Notable features: the 2gb model is the same price as a new RPi and features a USB3 port, optional emmc storage, actual gigabit ethernet and a DC-in barrel jack.

    I've had a Rock64 acting as a docker host for a bunch of 24/7 services for the last few months and I love it. The 4GB model is only $45; throw zram on there, feed it 2GB of RAM using lz4 as a swap device, and you're looking at ~8GB usable memory. You can run a surprising number of services on this board - I'm working with Graylog and FreeIPA now. I highly recommend it if/when you're looking for something with better IO than the RPi.
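    The ~8GB figure follows from assuming roughly 3:1 lz4 compression on the zram swap device - back-of-napkin, and the ratio is an assumption (real-world compressibility varies by workload):

```python
def effective_ram_gb(total_gb: float, zram_backing_gb: float,
                     ratio: float = 3.0) -> float:
    """Rough usable memory with zram: untouched RAM plus the capacity of
    the compressed swap device (backing RAM times compression ratio)."""
    return (total_gb - zram_backing_gb) + zram_backing_gb * ratio

# 4GB board, 2GB dedicated to zram at ~3:1 -> 2 + 6 = ~8GB "usable"
print(effective_ram_gb(4, 2))
```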
     
    #16
    Last edited: Aug 6, 2018
    BlackHole and SlickNetAaron like this.
  17. WANg

    WANg Active Member

    Joined:
    Jun 10, 2018
    Messages:
    219
    Likes Received:
    84
    Heh. Have you read my writeup on using an HP t730 thin client to upgrade the processing power of my N40L (retain the G7 as an iSCSI box and use something else to do the computing)? You might be able to do the same, slapping something like a Supermicro E300-8D in front of it and running the storage via iSCSI (no 10Gbit love? a pair of quad 1G NICs will do the job). I would suggest picking up an EliteDesk 800 G2 SFF, but you need ECC support, so that's naturally a no-go.
     
    #17
  18. BlackHole

    BlackHole New Member

    Joined:
    Jul 21, 2018
    Messages:
    12
    Likes Received:
    8
    First: RasPi is still in shipping, so no real news there.
    TY for your very useful information. The PSU, eh, wall wart I ordered is a RasPi-approved model, so I hope that means what I think it means... I was already planning to attach an SSD via USB, but never considered installing the OS on there. User data, /var/log etc. was my idea, while using the SD card for more static data. I'll have to look into whether that's for me, and how it's done.
    I am still very unsure about other ARM boards - I was not planning to put too much effort into getting a standard Linux distro to run from it. I chose the RasPi for the widespread out-of-the-box support and the huge community.
    Though I really liked the Odroid XU4 from the specs - until I saw the secure boot hassle they implemented.
    So yeah, this is me being lazy looking for a much PC-like setup/maintenance experience.
    I did; however, I concluded that this was not what I had in mind. Running two 'big' boxes all the time would not fit my 'low idle power' definition, esp. if the Microserver needs to be beefed up with some add-in card. Very creative solution, though. The use of the thin client gave me ideas, but I still think the RasPi suggestion will work better for me. Word is they consume 3W at idle - that'd be perfect. I'll try that first.
    Also, my Microserver is really due for retirement, with 5+ years of 24/7 operation. Fans are getting noisy etc. It was great value, very reliable - if it had AES-NI I might be tempted to repair it.
     
    #18
  19. WANg

    WANg Active Member

    Joined:
    Jun 10, 2018
    Messages:
    219
    Likes Received:
    84
    Yeah, but your own list of desirable features includes the need to support multiple 2.5" disks. How does using a RasPi fit in this case? It's a hobbyist board with a few USB ports and no native SATA or SAS interface (a Helios4 might make more sense there). Are you planning to slap the drives into USB2 enclosures and softRAID between them?
     
    #19
  20. arglebargle

    arglebargle H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈

    Joined:
    Jul 15, 2018
    Messages:
    299
    Likes Received:
    83
    The official supply and the CanaKit work fine; you'll be alright there. Definitely put the OS on the SSD as well - performance will be much, much better. An SSD will consistently outperform even "fast" A1-rated SD cards in small random IO tests, and small random IO is about all the operating system is going to be doing most of the time. You could look at putting /var/log in RAM and flushing to disk periodically (Google "log2ram") - that's what most of the Pi-optimized distributions tend to do. I'm not sure if Raspbian does this by default; it's been a while since I've booted an actual RPi, and Armbian does a lot of these things out of the box.

    I use Armbian on almost all of my arm boards. I think all of the boards I listed above are officially supported (the Rock64 might still be beta/WIP) so the install experience is about the same as an RPi: write an image to an SD card using Etcher, pop it in and boot.

    Between the stock optimizations in Armbian for embedded use plus the value offered by the clone boards I don't see much reason to use an RPi anymore. I'm an experienced user though, I've been working with these arm boards for 3-4 years now, and I definitely think starting with an RPi is a good idea for someone new to the ecosystem. If you do find yourself wanting better IO or processor performance definitely check out the alternative boards I listed, I'm extremely happy with all of them.

    Here's the link again re: USB mass storage boot, I can't recommend this highly enough over using an SD card:
    How to boot from a USB Mass Storage Device on a Raspberry Pi 3 - Raspberry Pi Documentation
     
    #20
    BlackHole likes this.