So, pros and cons of replacing a Synology NAS with a server...


Rand__

Well-Known Member
Mar 6, 2014
6,634
1,767
113
Yep, spec'ed to requirements on #disks, CPU, size, noise and power utilisation... or whatever compromise you go for ;)
 

WANg

Well-Known Member
Jun 10, 2018
1,308
971
113
46
New York, NY
I have a small home network (5 computers), and use a Synology NAS to share files between them. I'm thinking of augmenting this NAS with a server, either a refurbished 'professional' unit or a DIY home-built unit, and am seeking advice on the pros and cons of such a system along with suggestions on a suitable route forward. My initial thoughts are to purchase a refurbished HP DL380e G8 P822 server and populate it with 3.5" SATA hard drives (I have a few spares lying around), gradually increasing the number of hard drives over time as my needs increase. I accept that this alone won't give me much more than the Synology NAS already provides but, as my knowledge and experience increase, I'd be interested in learning and playing with virtualisation. For that, I think a server rather than a NAS is a better option. I'd appreciate comments on the general concept and suggestions as to suitable hardware - e.g. the server chassis itself, appropriate CPUs, how much RAM, a good operating system (I'm thinking FreeNAS or Ubuntu, open to other ideas).

This will be my first server, and I'm new to the concept of servers, but eager to learn.
Okay. Let's take a fairly holistic approach to this - consider where you are planning to run this, how you plan to cool it, and how you plan to run it.

a) How much is a Kilowatt hour of electricity in your area?
Example: I live in NYC and power is roughly 0.29 USD per kilowatt-hour. 500 watts (roughly 5 Amps) is about what you can expect a dual-socket Ivy Bridge-E server with 8-12 DIMM slots populated and 3-4 drives spinning to draw (they idle at much less, but don't be surprised if they run harder and harder as they age out and software complexity grows). Assuming the server runs at 500 watts constantly, 24 hours a day, you are looking at ~3.50 USD/day of usage, or a little over 100 USD/month. Of course, if we assume it only averages about 20% of that, it'll still be ~20 USD/month. But that's not all...
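To make that arithmetic easy to re-run with your own local rate and wattage, here's a quick back-of-the-envelope sketch; the 0.29 USD/kWh and 500 W figures are just the example numbers above:

```python
# Rough electricity cost for a homelab box. The rate and wattage below are
# the example figures from this post -- substitute your own.
RATE_USD_PER_KWH = 0.29   # local electricity price
AVG_WATTS = 500           # what the server actually draws, on average

kwh_per_day = AVG_WATTS / 1000 * 24            # 12 kWh/day at 500 W
cost_per_day = kwh_per_day * RATE_USD_PER_KWH  # ~3.48 USD/day
cost_per_month = cost_per_day * 30             # ~104 USD/month

print(f"~{cost_per_day:.2f} USD/day, ~{cost_per_month:.0f} USD/month at full tilt")
print(f"~{cost_per_month * 0.2:.0f} USD/month if it averages ~20% of that draw")
```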

b) How good is the wiring in your residence?
Once again - this is a question of your local building code and whether the wiring has been upgraded. In an average US household, a single power circuit to a small room is around 13-15 Amps, or slightly under 1,500 to 1,750 Watts. If you are housing the server in your private home, you don't want the simple act of using a microwave oven to heat up food (or running a hair dryer) to trip the circuit breaker...or worse...
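The same kind of napkin math tells you how much headroom a circuit actually has; a small sketch, where the breaker rating and appliance wattages are illustrative placeholders rather than measurements:

```python
# Rough headroom check for a single household circuit at ~115 V nominal.
# Breaker rating and loads are illustrative -- check your own panel.
VOLTS = 115
BREAKER_AMPS = 15
circuit_watts = VOLTS * BREAKER_AMPS              # ~1725 W total

loads_watts = {"server": 500, "microwave": 1100}  # hypothetical loads on the same circuit
total = sum(loads_watts.values())

print(f"Circuit: ~{circuit_watts} W, planned load: {total} W")
if total > 0.8 * circuit_watts:                   # common 80% rule of thumb for continuous loads
    print("Too close for comfort -- expect tripped breakers (or worse).")
```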

c) How much noise can you tolerate?
Some homelabbers like to discount noise generation as a concern, but one of the things people tend to underestimate is how noisy rackmount servers really are. When they start up there is a 2-minute period where they spin up all their fans (usually small-diameter, high-RPM units due to rack height constraints, which sound like jet engines), and your home will sound like the parking area of a major airport. Most of the time the fans will spin back down (the R630s that I administer at work do that), but not always. The Proliant G8s are known for keeping the fans at 35% even at idle, which makes for quite a racket. If you plan to stick a server out in the open at your home and you have family members there...well, don't. They will complain about the incessant noise.

This of course brings us to -

d) How would you protect them?
Unless you live alone, you will almost always want to protect the server from someone who might randomly poke at it. Maybe it's a family pet? Perhaps it's your curious grade-school nephew? Or perhaps it's the caretaker for your elderly mother who accidentally bumped into the server, or needed the power outlet to run a vacuum cleaner and unplugged it for 30 seconds (which knocked it offline). Even if you live alone, there are regional environmental risks, like power surges, flooding, and so on.

e) How do you plan to cool it?
The Ivy Bridge-E Xeons in those Proliants are 95W TDP (max), and even at idle they generate some heat. So do the DIMMs. So do the drives. And so do the power supplies. On the 2U model of that Proliant there are 6 fans, and all that airflow needs to go somewhere. There's a good reason why office server rooms are sealed, air-conditioned rooms (which have their own power requirements) with noise abatement tiles all over. You will also read about how STH forum members build their own server rooms in their homes, complete with moisture sensors, UPSes, extra power circuits, AC (or forced ventilation cooling), locks and all that stuff.
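To put a number on the cooling side: essentially every watt the box draws ends up as heat in the room, and air conditioners are rated in BTU/hr. A minimal sketch, reusing the 500 W figure from the earlier example:

```python
# Every watt drawn is eventually dumped into the room as heat.
# 1 W is about 3.412 BTU/hr, so a 500 W server is roughly a
# 1,700 BTU/hr space heater your AC has to fight year-round.
AVG_WATTS = 500
BTU_PER_HR_PER_WATT = 3.412

print(f"{AVG_WATTS} W of server load ~= {AVG_WATTS * BTU_PER_HR_PER_WATT:.0f} BTU/hr of heat to remove")
```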

f) Are you throwing good money after bad?
That's an American saying, and it revolves around the idea of making a bad investment and then sinking further money into it in an attempt to recoup the loss. For example, those Ivy Xeons in the Proliant DL380 Gen8 are from 2012, which means that in all cases they have already gone through the usual 3-to-5-year depreciation cycles of corporate environments and are, in every sense of the word...worthless (at least to anyone using them to earn money). Unless it was sold to you at fire-sale pricing (or as a free hand-me-down), you will still have to pay for shipping. And when you factor in discounts, clearance and sales, it's often not worth it.

For the homelabbers: you will have to buy DDR3 memory for them, which is fine, except that you cannot carry it forward to the next machine you buy (because that will almost certainly be DDR4-based). Then there is the question of whether the chassis will support the features you want to learn. Can those old chassis support PCIe lane bifurcation for NVMe? What about lights-out management? Is it worth paying actual money to HPe for an iLO license to access your 8-year-old server? Then of course there's the question of functional obsolescence - look at the bellyaching that resulted when VMware released vSphere 7 and support for some rather old (and some rather new, in the case of Realtek NICs) hardware was dropped. Red Hat has already talked about dropping support for anything prior to Haswell-E in their next release.

The thing about server hardware in general is that there are big ones, and there are small ones. They all play a role in the ecosystem - but not all have the best interests of homelabbers in mind. Some have the peculiar mix of modern tech, good expandability, efficiency and quiet operation that makes them excellent for home use, and some are also inexplicably cheap.

Some of us are content with a Proliant Microserver Gen7/8/10/10+, some of us went for the SuperMicro SYS-5028D-TN4T (a good machine, but the prices haven't really dropped much in the past 4 years). Some of us picked up Proliant EC200as (a bit limiting in my opinion), or got corporate NUCs (Project TinyMiniMicro) to cluster machines up. Hell, I bought Dell Wyse/HP thin clients to act as server hardware, and that actually worked out quite well, but then, I have specific needs and constraints.

The question really is...do you know what you really need, and what you are willing to compromise upon? How much electric power can you afford per month, how much power can you safely consume, what do you want to do with it, and what is your plan to maintain it? Can't really talk war without first talking about logistics, and you can't talk about what to buy if you can't nail down what you can live with.
 
Last edited:

CreoleLakerFan

Active Member
Oct 29, 2013
485
180
43
If you are interested in running additional things at your home, possibly some media and/or home automation, I'd suggest you take a look at running Docker containers right on your Synology. DiskStation Manager - Knowledge Base | Synology Inc.
If you need to add more drives, I'd check if your home Synology model supports expansions. - Expansion Units | Synology Inc.
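For what it's worth, once the Docker package is installed from Package Center you can drive it over SSH with the normal Docker CLI as well as through the GUI; here's a minimal sketch, where the image name and the /volume1 paths are placeholders for whatever you actually want to run:

```python
# Minimal sketch: start a container on a Synology via the Docker CLI
# (run over SSH). Image name and /volume1 paths are placeholders.
import subprocess

subprocess.run([
    "docker", "run", "-d",
    "--name", "media-server",
    "--restart", "unless-stopped",
    "-p", "8096:8096",                         # app web UI
    "-v", "/volume1/docker/media:/config",     # persistent config on the NAS
    "-v", "/volume1/media:/media:ro",          # existing share, mounted read-only
    "example/media-server:latest",             # placeholder image
], check=True)
```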

Getting into servers is usually an expensive proposition (in terms of electricity usage), and if your goal is to learn more about IT, then I highly recommend you instead learn how to do IT on public clouds - that is where IT is going in a few years anyhow - instead of learning a skillset which is quickly becoming obsolete outside of datacenters (before anyone jumps in and accuses me of heresy, keep in mind that I've been in IT professionally for over 20 years).
The cloud is just someone else's data center.
 
  • Like
Reactions: awedio

WANg

Well-Known Member
Jun 10, 2018
1,308
971
113
46
New York, NY
The cloud is just someone else's data center.
Yes. One you never had to visit, fret over its cage layout, figure out the power distribution, price, procure and rack the servers, figure out how to wire it up from the demarc to the distribution switches, track work/visit logs for compliance reasons, or indeed schedule weekend work visits for. Instead I just had to summon them via a few lines off an API and get the orchestration going. You make it sound as if it's a bad thing.
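To be concrete about "a few lines off an API": here's a sketch of what that looks like against EC2 with boto3, where the AMI ID, instance type and key pair name are placeholders (and credentials are assumed to be configured):

```python
# A sketch of "summoning" a server via API instead of racking one.
# AMI ID, instance type and key pair are placeholders; assumes AWS
# credentials are already configured for boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.medium",
    KeyName="my-keypair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print("Launched:", resp["Instances"][0]["InstanceId"])
```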
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,053
437
83
Yes. One you never had to visit, fret over its cage layout, figure out the power distribution, price, procure and rack the servers, figure out how to wire it up from the demarc to the distribution switches, track work/visit logs for compliance reasons, or indeed schedule weekend work visits for. Instead I just had to summon them via a few lines off an API and get the orchestration going. You make it sound as if it's a bad thing.
Agreed on all points. I'd also add the existing heap of management systems, SaaS and PaaS solutions.
There is a vast difference between virtualization and cloud, regardless of whether it's on-prem or in the cloud.
 

WANg

Well-Known Member
Jun 10, 2018
1,308
971
113
46
New York, NY
DIY - https://forums.servethehome.com/ind...hs-from-a-supermicro-c2758.29750/#post-275789

Low power, 8-drive capacity, can run hypervisor or bare metal, TINY footprint.

I've struggled over the years going big NAS, small NAS, all-in-one, back to NAS, etc. I'm settling back into the tiny footprint, great power consumption, good performance, minimal cost vs. Synology, etc...

Build another server for VMs.
Yeah, but then the question is whether you are comfortable being the first line of support for any issues that might pop up with the machine, some of which are design/hardware based and cannot be easily worked around. Customization is great, but being the first of a kind means that you have to own whatever problems you encounter.

Sometimes you might deal with a slow vendor who might not be as up-front about support as the big three (example: Look at the debacle behind the Gigabyte Brix i5-5775...which they never fixed). That's a NUC class machine that was used in some home labs.

That's an argument @Patrick made in favor of the TinyMiniMicro machines, as some are B-stock machines with fully intact 3-year NBD/on-site warranties, as are some outlet deals from large vendors like Insight or CDW (like this Proliant MSG10 for 277 USD - sure, its CPU is a bit weak, but stand it up with a boot SSD and 4 drives and it's pretty compelling as a 10/40Gbit TrueNAS Core iSCSI target. Hell, this MSG10+ was going for 415 earlier this week until CDW discontinued it). That being said, the big 3 are often not that much better.
HPe is still clueless enough not to address issues with SR-IOV on the MSG10 Plus, and you are still expected to pay HPe to get a fix (if a fix ever appears)...
 
Last edited:

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,057
113
Yeah, but then the question is whether you are comfortable being the first line of support for any issues that might pop up with the machine, some of which are design/hardware based and cannot be easily worked around. Customization is great, but being the first of a kind means that you have to own whatever problems you encounter.

Sometimes you might deal with a slow vendor who might not be as up-front about support as the big three (example: Look at the debacle behind the Gigabyte Brix i5-5775...which they never fixed). That's a NUC class machine that was used in some home labs.

That's an argument @Patrick made in favor of the TinyMiniMicro machines, as some are B-stock machines with fully intact 3-year NBD/on-site warranties, as are some outlet deals from large vendors like Insight or CDW (like this Proliant MSG10 for 277 USD - sure, its CPU is a bit weak, but stand it up with a boot SSD and 4 drives and it's pretty compelling as a 10/40Gbit TrueNAS Core iSCSI target. Hell, this MSG10+ was going for 415 earlier this week until CDW discontinued it). That being said, the big 3 are often not that much better.
HPe is still clueless enough not to address issues with SR-IOV on the MSG10 Plus, and you are still expected to pay HPe to get a fix (if a fix ever appears)...
How is being the "first line of support" relevant for a home NAS someone is learning with? I don't believe it is a factor at all.

With the price savings of what I proposed over enterprise gear or pre-made, you could have spares on hand if you wanted, or just order from Amazon with 2-5 day delivery to replace the part, because this isn't years-old tech - it's tech from this year\last year (2020\2019).

Let us also not forget that most failures of new hardware occur within 30-90 days of being put into service... all that used enterprise gear a lot of us run 24\7 for years already ran 24\7 for 3-5 years too... failures shouldn't be an issue for most. I'd be more concerned with running RAIDZ2 or RAIDZ3 for home\important documents to account for spinning HDD failure.

I stand by my suggestion for either a baremetal NAS or starter virtualization... you can't really do better for the price from what I found.
Unless you want to run a higher-power E5 v3 \ X10 motherboard, in which case you can spend about the same and get more multithreaded performance and more RAM available, but at the expense of 3x the idle power usage. But if you need more RAM and\or PCIe, this is the best way to go too. (The motherboard\CPUs I mentioned in another thread I'm selling were my ex-home NAS\AIO, and at one time I had 8x spinners, 10x+ SSDs (SATA & SAS) and 2x NVMe... it could DO IT ALL but it used a LOT of idle power.)

I purchased some other new hardware (E-21xx CPU + motherboard) from another member here to run my home Proxmox\VMs, and running that server + my home baremetal NAS is still lower power than my previous E5 AIO.
 

CreoleLakerFan

Active Member
Oct 29, 2013
485
180
43
Yes. One you never had to visit, fret over its cage layout, figure out the power distribution, price, procure and rack the servers, figure out how to wire it up from the demarc to the distribution switches, track work/visit logs for compliance reasons, or indeed schedule weekend work visits for. Instead I just had to summon them via a few lines off an API and get the orchestration going. You make it sound as if it's a bad thing.
I don't make anything sound like anything.

I simply made the observation that "the cloud" lives in a datacenter, so the skills required for data center management are not becoming obsolete any more than the datacenters in which the cloud providers host the cloud resources are. Self-hosting is becoming obsolete because it's a poor value proposition, but the datacenters themselves aren't going anywhere - they're just being managed by other organizations, which will still require the skillsets that those of us who have been working in data centers for the last 2-3 decades have acquired.
 
  • Like
Reactions: T_Minus

TRACKER

Active Member
Jan 14, 2019
178
54
28
...and how is it a “bad thing”? A one-line generalized statement is not the same thing as a well-reasoned argument.
Sorry for the off topic.
Yes, it is that simple. It always has been.
It's just someone else owning your data. That's it.
I personally don't feel comfortable having someone else keep my data and being dependent on their "cloud".
 
  • Like
Reactions: kaefers and T_Minus

WANg

Well-Known Member
Jun 10, 2018
1,308
971
113
46
New York, NY
Sorry for the off topic.
Yes, it is that simple. It always has been.
It's just someone else owning your data. That's it.
I personally don't feel comfortable having someone else keep my data and being dependent on their "cloud".
Yeah, but that's an over-simplification that sidesteps all the inherent issues with self-hosting. The whole idea of running your tasks on someone else's computer is not new - that has been the case since the days of mainframe timesharing. Besides, why would renting a dedicated space to keep your data make it inherently more trustworthy?

When you handle colocation yourself, you are trusting the security at your colocation facility to do its job and make sure the right people are let in and everyone else is turned away. You are trusting the environmental controls and power distribution at your DC to be well maintained and functional, you are trusting your hardware partners to do their job and get parts dispatched (sometimes before a failure, sometimes immediately after one), you are trusting your DC to maintain cross-connects to the meet-me room, and you are trusting your networking providers to maintain their uptime, and on top of that, you are expecting all those vendors to not be sketchy... You are also trusting your own accounts payable department to keep tabs on all those vendor relationships and to make sure everyone is paid on time and in good standing, and if anyone or anything falls down in that chain of trust, you had better be able to react quickly.

Does all this make me feel safer about the contents of my cage simply because of some nebulous definition of ownership (which is defined by a legally enforced SLA anyway)? No...but no more than staying up at night worrying about a massive incident knocking out an EC2 availability zone where you have instances running. On one hand I might maintain a few cages and sign off on the coloc fees, the cross-connects, the fiber, and the SAN inside the cage. On the other hand, I have the name of an account rep at Amazon/DigitalOcean/Azure/whatever who would no doubt scream at his own people to make sure I am satisfied (at least to the letter of the SLA).
At the very end of the day, why would you trust the cloud any more or less than the already established chain of trust that you need to get things done when you run your own cage? In both cases, all you are doing is relying on someone else's economy of scale and the strength of a legally enforced SLA.

Note: the argument isn't any better when you are colocating inside your own home. You have to scope/create/man physical security, you'll need to pay the utilities for redundant feeds, you'll need to maintain your own UPS, HVAC, pumps and environmental controls, you'll need to pay extra for professional-grade connectivity (and redundant links at that), you'll have to pay your own rent/mortgage/property taxes, and you'll have to pay vendors to send parts to you when you need them...and if you expect this setup to host a revenue-generating business, you'll have to face down inspections and audits. The idea that all this is worth your "ownership" rings hollow when the bills and responsibilities pile up.
 
Last edited:
  • Like
Reactions: Tha_14

WANg

Well-Known Member
Jun 10, 2018
1,308
971
113
46
New York, NY
I don't make anything sound like anything.

I simply made the observation that "the cloud" lives in a datacenter, so the skills required for data center management are not becoming obsolete any more than the datacenters in which the cloud providers host the cloud resources are. Self-hosting is becoming obsolete because it's a poor value proposition, but the datacenters themselves aren't going anywhere - they're just being managed by other organizations, which will still require the skillsets that those of us who have been working in data centers for the last 2-3 decades have acquired.
Well, I don't entirely buy the argument that everything will be in "the cloud", either. There are custom workloads that require someone to go to a cage, rack custom servers and set up their own cabling/storage/whatever. Sometimes generic templates are just not versatile enough (just look at stock exchanges running data centers and trading firms racking cages and rooms there - you can't do that with Azure or EC2 yet...).

The best skillset is usually one with a healthy mix of both being able to script nodes and being able to rack servers. Besides, the company you work for can build a private cloud at its own data center with well-defined server templates that can be orchestrated, so "the cloud is someone else's server" can simply become "the cloud is your own servers, networking and storage gear managed by orchestration and generalized for the tasks you devise".
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,057
113
There's a point, IMO, where medium+ sized businesses can benefit (financially and otherwise) from "self" hosting, i.e. their own racks.
Or, as mentioned, if you're doing a lot of high-performance or GPU work, it's absurdly more expensive to pay for "cloud" - not that cloud storage or compute isn't multiples of the cost of self-hosted anyway. Anyway, yeah, worth another topic discussion.
 

WANg

Well-Known Member
Jun 10, 2018
1,308
971
113
46
New York, NY
How is being the "first line of support" relevant for a home NAS someone is learning with? I don't believe it is a factor at all.

With the price savings of what I proposed over enterprise gear or pre-made, you could have spares on hand if you wanted, or just order from Amazon with 2-5 day delivery to replace the part, because this isn't years-old tech - it's tech from this year\last year (2020\2019).

Let us also not forget that most failures of new hardware occur within 30-90 days of being put into service... all that used enterprise gear a lot of us run 24\7 for years already ran 24\7 for 3-5 years too... failures shouldn't be an issue for most. I'd be more concerned with running RAIDZ2 or RAIDZ3 for home\important documents to account for spinning HDD failure.

I stand by my suggestion for either a baremetal NAS or starter virtualization... you can't really do better for the price from what I found.
Unless you want to run a higher-power E5 v3 \ X10 motherboard, in which case you can spend about the same and get more multithreaded performance and more RAM available, but at the expense of 3x the idle power usage. But if you need more RAM and\or PCIe, this is the best way to go too. (The motherboard\CPUs I mentioned in another thread I'm selling were my ex-home NAS\AIO, and at one time I had 8x spinners, 10x+ SSDs (SATA & SAS) and 2x NVMe... it could DO IT ALL but it used a LOT of idle power.)

I purchased some other new hardware (E-21xx CPU + motherboard) from another member here to run my home Proxmox\VMs, and running that server + my home baremetal NAS is still lower power than my previous E5 AIO.
It's relevant if this is your first non turn-key server, and you are trying to figure out which direction you wish to go. At the end of the day someone like @meles meles (I am guessing English is not his native language and he's likely located overseas - (guessing it's a he...Meles Zenawi was the name of a well known Ethiopian politician) ) is trying to replace a Synology (not sure if it's one, two, 4 or 6 bays) which is relatively user friendly, GUI based with retail end user support...with something he wants to build or buy. The question here is if he should buy something (might not be a turnkey solution but rather a brand-name machine with warranty support), or build something. That's dependent on his skills and how comfortable he is with it, whether his local conditions allow him to price it out/run it, and how much patience or spare time he has. That's why not everyone is "build-build-build" instead of "buy-buy-buy".

There are skills that need to be learned when it comes to rolling your own NAS, like how to make arrays/volume groups, how to enable, define and maintain file shares, how to allocate extents, and how to troubleshoot if the performance needs to be tuned (and all that). For some people, having to deal with hardware problems before even getting to the software configuration might just be a bridge too far.
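As a rough illustration of the kind of plumbing a turnkey NAS GUI hides, here's a minimal sketch of creating a mirrored ZFS pool and sharing it over NFS, driven from Python with generic zpool/zfs commands; the device, pool and dataset names are placeholders, and this is not FreeNAS-specific:

```python
# Minimal sketch of "rolling your own" storage at the command level:
# a two-disk ZFS mirror shared over NFS. Device and pool names are
# placeholders; assumes a host with ZFS installed and root privileges.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["zpool", "create", "tank", "mirror", "/dev/sdb", "/dev/sdc"])  # RAID1-equivalent pool
run(["zfs", "create", "tank/shares"])                               # dataset for file shares
run(["zfs", "set", "sharenfs=on", "tank/shares"])                   # export it over NFS
run(["zpool", "status", "tank"])                                    # the health check you now own
```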

Somewhere in my head, while shopping for a replacement for my Proliant MSG7 N40L (my current FreeNAS 11 machine), I wish there were a mass-produced, quiet 4-to-6-bay server with decent specs (Xeon D-1500 series or similar) which can be wired up for 10/40GbE networking, preferably one that was produced a few years ago and is entering the secondary market at good rates - something like an HPe EC200A with the disk expansion module, a SYS-5028D-TN4T (still too expensive), or something like an HPe Edgeline EL1000 with SFP+ passthrough and a Proliant m510 Moonshot cartridge (although the EL1000 doesn't have much storage, so we are back to square one). I might go with a pre-owned Proliant MSG10. Sure, it's not that fast, but if it's cheap enough on the secondary market, it's still a decent 5-bay machine with a single PCIe x16 slot that will last another few years running TrueNAS Core or whatever. One thing I do agree with you on is not spending money on hardware that is too old (that DL380 G8 is definitely a good example of that). That's just not worth it.

The thing to note here is that when it comes to NASes, you might find that "growing" your array over time doesn't work the way you expect (example: you start with RAID1 across 2 drives, then RAID10 across 4, but then you might want RAID6 across all the drives, in which case your original array might need to be destroyed to make that happen...at least if the device is a 4-to-6-bay).
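A quick capacity sketch of that growth path makes the trade-off obvious (standard formulas, equal-size drives assumed; the 4 TB figure is just illustrative):

```python
# Usable capacity at each step of the "grow the array" path above,
# assuming equal-size drives. Standard formulas, illustrative numbers.
def usable_tb(level, n_drives, drive_tb):
    if level == "RAID1":              # two-way mirror
        return drive_tb
    if level == "RAID10":             # striped mirrors
        return (n_drives // 2) * drive_tb
    if level in ("RAID6", "RAIDZ2"):  # two drives' worth of parity
        return (n_drives - 2) * drive_tb
    raise ValueError(level)

for level, n in [("RAID1", 2), ("RAID10", 4), ("RAID6", 6)]:
    print(f"{level} across {n} x 4 TB drives -> ~{usable_tb(level, n, 4)} TB usable")
```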
 
Last edited:
  • Like
Reactions: Rand__

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
A lot of people, I think, vastly overestimate how much usage they need, especially CPU. For home I am firmly in the camp of buying something new, small, quiet, and low in power consumption.
I don't agree that everything at home is disappearing and it's all going to the cloud. Even if you use, say, O365, how do you back up that data? Answer: a backup stored at your place, which is then offsite from the cloud. What about camera footage for a home or small business?

I see a lot less hype about Synology and QNAP etc. these days, but I don't know that that means they sell any less - just that for the people who need them there is a lot less discussion; they've become a utility, if you could say that.

One or two small modern systems is the way to go, I think - commercial from HPE/Dell/Lenovo etc., or make your own from Supermicro/ASRock Rack etc., whatever you prefer. If you outgrow that, then at least you haven't wasted money; just re-use your old stuff for something else like backup or testing.
 
  • Like
Reactions: Techspin

meles meles

New Member
Jul 28, 2020
21
0
1
"At the end of the day someone like @meles meles (I am guessing English is not his native language and he's likely located overseas - (guessing it's a he...Meles Zenawi was the name of a well known Ethiopian politician) ) is trying to replace a Synology (not sure if it's one, two, 4 or 6 bays) which is relatively user friendly, GUI based with retail end user support...with something he wants to build or buy. The question here is if he should buy something (might not be a turnkey solution but rather a brand-name machine with warranty support), or build something. That's dependent on his skills and how comfortable he is with it, whether his local conditions allow him to price it out/run it, and how much patience or spare time he has. That's why not everyone is "build-build-build" instead of "buy-buy-buy". "

meles meles is our Sunday Best name, ooman ! It's Latin, but most people call us "Badger" in English. This server / NAS will be part of our local settwork, linked to our intersett, supplementing but not replacing a Synology DS418play.


*Small brain smoulders*
 
Last edited:

meles meles

New Member
Jul 28, 2020
21
0
1
Well, following on from all the good advice received above, we ditched the idea of resurrecting an old enterprise server and set out on the 'roll your own' route. We're almost there.

Hardware comprises:

The components were chosen because they were available to us and either cheap or free rather than as a result of some great plan.

Assembly of the hardware was easy enough, with the first snag only arising when we came to boot up for the first time. The motherboard uses UEFI rather than BIOS, something we consider a retrograde step. Under BIOS it was easy enough to assign the desired boot sequence to the appropriate drives, but we always struggle with UEFI. Eventually, after much cursing of whoever invented UEFI, we managed to get it to recognise the USB port as the first boot device and installed FreeNAS onto a 120 GB Kingston SSD. That's the smallest drive we had to hand and so was selected to host the OS. Installation of FreeNAS seemed to proceed exactly as it should (or at least exactly in line with the YouTube tutorial) and so we rebooted the hardware to bring the server online.

Eventually, after much further struggling with UEFI, we got it to ignore the USB port and boot from the Kingston SSD. Did we mention how much we prefer the old BIOS? Back at our main PC, we opened up a web browser and called up the FreeNAS login screen. Username and password were entered and FreeNAS opened up for us.

Now the learning begins...