I have a small home network (5 computers), and use a Synology NAS to share files between them. I'm thinking of augmenting this NAS with a server, either a refurbished 'professional' unit or a DIY home-built unit, and am seeking advice on the pros and cons of such a system along with suggestions on a suitable route forward. My initial thoughts are to purchase a refurbished HP DL380e G8 P822 server and populate it with 3.5" SATA hard drives (I have a few spares lying around), gradually increasing the number of hard drives over time as my needs increase. I accept that this alone won't give me much more than the Synology NAS already provides but, as my knowledge and experience increase, I'd be interested in learning and playing with virtualisation. For that, I think a server rather than a NAS is a better option. I'd appreciate comments on the general concept and suggestions as to suitable hardware - e.g. the server chassis itself, appropriate CPUs, how much RAM, a good operating system (I'm thinking FreeNAS or Ubuntu, open to other ideas).
This will be my first server, and I'm new to the concept of servers, but eager to learn.
Okay. Let's take a fairly holistic approach to this - consider where you plan to put it, how you plan to power and cool it, and how you plan to live with it.
a) How much is a kilowatt-hour of electricity in your area?
Example: I live in NYC and power is roughly 0.29 USD per kilowatt-hour. 500 watts (a bit over 4 amps at 120 V) is roughly what you can expect a dual-socket Ivy Bridge-E server with 8-12 DIMM slots populated and 3-4 drives spinning to draw (they idle much lower, but don't be surprised if they run harder and harder as they age and your software stack grows). Assuming the server runs at 500 watts constantly, 24 hours a day, you are looking at ~3.50 USD/day, or a little over 100 USD/month. Of course, if we assume only about 20% average load, it's still ~20 USD/month. But that's not all...
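If you want to plug in your own numbers, here's a minimal back-of-the-envelope sketch - the rate, wattage and 20% utilisation figure are just the assumptions from above, so swap in your own utility rate and measured draw:

```python
# Quick electricity-cost estimate for a homelab box.
# All constants are assumptions from the example above - adjust for your situation.

RATE_USD_PER_KWH = 0.29   # NYC-ish residential rate
WATTS = 500               # steady-state draw of a loaded dual-socket 2U server
AVG_UTILISATION = 0.20    # rough duty-cycle fudge factor

kwh_per_day = WATTS / 1000 * 24
full_tilt_month = kwh_per_day * RATE_USD_PER_KWH * 30
realistic_month = full_tilt_month * AVG_UTILISATION

print(f"Flat out, 24/7: ~{full_tilt_month:.0f} USD/month")
print(f"At ~20% average load: ~{realistic_month:.0f} USD/month")
```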
b) How good is the wiring in your residence?
Once again - this is a question of your local building code and whether your wiring is up to date. In an average US household, a single circuit feeding a small room is on a 15 or 20 amp breaker, which works out to roughly 1,800 to 2,400 watts at 120 V (and you shouldn't run a continuous load past about 80% of that). If you are housing the server in your private home, you don't want the simple act of using a microwave oven to heat up food (or a hair dryer) to trip the circuit breaker...or worse...
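If you want to sanity-check a specific circuit, here's a rough sketch - the breaker rating, voltage and the "other loads" figure are all assumptions, so check your own panel and local code:

```python
# Rough headroom check for putting a server on a shared household circuit.
# Every number here is an assumption; verify against your actual breaker.

BREAKER_AMPS = 15          # typical US bedroom/office circuit
VOLTAGE = 120              # US split-phase leg
CONTINUOUS_DERATE = 0.80   # common rule of thumb for continuous loads

SERVER_WATTS = 500
OTHER_LOADS_WATTS = 900    # e.g. a small microwave sharing the same circuit

usable_watts = BREAKER_AMPS * VOLTAGE * CONTINUOUS_DERATE
remaining = usable_watts - SERVER_WATTS - OTHER_LOADS_WATTS

print(f"Usable continuous capacity: {usable_watts:.0f} W")
print(f"Headroom left: {remaining:.0f} W")  # negative means a likely tripped breaker
```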
c) How much noise can you tolerate?
Some homelabbers like to discount noise as a concern, but one of the things people tend to underestimate is just how loud rackmount servers really are. When they start up there is a roughly two-minute period where they spin all their fans to full speed (usually small-diameter, high-RPM units because of the limited rack height, and they sound like jet engines), and your home will sound like the apron of a major airport. Most of the time the fans then spin back down (the R630s I administer at work do), but not always. The ProLiant Gen8s are known for holding the fans at 35% or so even at idle, which makes for quite a racket. If you plan to stick a server out in the open in your home and you have family members there...well, don't. They will complain about the incessant noise.
This of course brings us to -
d) How would you protect them?
Unless you live alone, you will almost always want to protect the server from someone who might randomly poke at it. Maybe it's a family pet? Perhaps it's your curious grade-school nephew? Or perhaps it's the caretaker for your elderly mother, who accidentally bumped into the server, or needed the power outlet to run a vacuum cleaner and unplugged it for 30 seconds (knocking it offline). Even if you live alone, there are regional environmental risks, like power surges, flooding and so on.
e) How do you plan to cool it?
The Ivy Bridge-E Xeons in those ProLiants are 95 W TDP (max), and even at idle they generate some heat. So do the DIMMs. So do the drives. And so do the power supplies. On the 2U model of that ProLiant there are 6 fans, and all that airflow needs to go somewhere. There's a good reason office server rooms are sealed, air-conditioned rooms (with their own power requirements) with noise-abatement tiles all over. You will also read about how STH forum members build their own server rooms in their homes, complete with moisture sensors, UPSes, extra power circuits, AC (or forced-ventilation cooling), locks and all that.
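To get a feel for the cooling load, remember that essentially every watt the box draws ends up as heat in the room. A quick conversion sketch, using the same 500 W assumption as earlier:

```python
# Every watt of continuous draw becomes roughly 3.412 BTU/hr of heat,
# which is the unit air-conditioner capacity is usually sold in.

SERVER_WATTS = 500        # same assumed draw as the earlier example
BTU_PER_WATT_HR = 3.412

heat_btu_hr = SERVER_WATTS * BTU_PER_WATT_HR
print(f"~{heat_btu_hr:.0f} BTU/hr of heat")   # ~1700 BTU/hr

# For comparison, the smallest window AC units are rated around 5000 BTU/hr,
# so one server is manageable; a rack of them is a different story.
```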
f) Are you throwing good money after bad?
That’s an American saying, but it revolves around the idea of making a bad investment and then sinking further money into it in an attempt to recoup the loss. For example, those Ivy Bridge Xeons in the ProLiant DL380 Gen8 are from 2012, which means that in virtually all cases they have already gone through the usual 3 to 5 year depreciation cycle of a corporate environment and are, in every sense of the word...worthless (at least to anyone using them to earn money). Unless one was sold to you at fire-sale pricing (or handed down for free), you will still have to pay for shipping. And once you factor in the discounts, clearance deals and sales on newer hardware, it’s often not worth it.
As a homelabber, you will have to buy DDR3 memory for it, which is fine, except that you cannot carry it forward to your next machine (which will almost certainly be DDR4-based). Then there is the question of whether the chassis will support the features you want to learn. Can those old chassis do PCIe lane bifurcation for NVMe? What about lights-out management - is it worth paying HPE actual money for an iLO license to access your 8-year-old server? Then of course there’s the question of functional obsolescence - look at the bellyaching when VMware released vSphere 7 and dropped support for some rather old (and, in the case of Realtek NICs, some rather new) hardware. Red Hat has already talked about dropping support for anything prior to Haswell-E in their next release.
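If you want a rough way to check whether a given box clears the x86-64-v2 baseline that newer enterprise distros have been moving toward, here's a minimal Linux-only sketch - the flag list is my approximation of the v2 requirements, not an authoritative definition:

```python
# Rough check of whether the CPU advertises the x86-64-v2 feature set,
# by parsing /proc/cpuinfo on Linux. The flag list is an approximation.

V2_FLAGS = {"cx16", "lahf_lm", "popcnt", "sse4_1", "sse4_2", "ssse3"}

cpu_flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            cpu_flags = set(line.split(":", 1)[1].split())
            break

missing = V2_FLAGS - cpu_flags
print("Looks x86-64-v2 capable" if not missing else f"Missing flags: {sorted(missing)}")
```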
The thing about server hardware in general is that there are big ones and there are small ones. They all play a role in the ecosystem - but not all of them have the best interests of homelabbers in mind. Some have that peculiar mix of modern tech, good expandability, efficiency and quiet that makes them excellent for home use, and some are also inexplicably cheap.
Some of us are content with a ProLiant MicroServer Gen7/8/10/10+, some of us went for the SuperMicro SYS-5028D-TN4T (a good machine, but the price hasn’t really dropped much in the past 4 years). Some of us picked up ProLiant EC200a units (a bit limiting, in my opinion), or got corporate NUCs and 1L desktops (STH’s Project TinyMiniMicro) to cluster machines up. Hell, I bought Dell Wyse/HP thin clients to act as server hardware, and that actually worked out quite well - but then, I have specific needs and constraints.
The question really is...do you know what you actually need, and what you are willing to compromise on? How much electricity can you afford per month, how much power can you safely draw, what do you want to do with the machine, and what is your plan for maintaining it? You can't really talk war without first talking logistics, and you can't talk about what to buy until you've nailed down what you can live with.