Synology vs DIY NAS for home?


Sean Ho

seanho.com
Nov 19, 2019
768
352
63
Vancouver, BC
seanho.com
Separate compute and storage: let the NAS focus on just file serving, and have a separate compute box (TMM/uSFF, old desktop, rack server, what have you) run the heavy containers. NFS or iSCSI.
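A minimal sketch of what that looks like with NFS (paths, hostname, and subnet are hypothetical): one export on the NAS, one mount on the compute box, and the containers just bind-mount the share.

Code:
# on the NAS, /etc/exports (then run: exportfs -ra)
/volume1/media  192.168.1.0/24(rw,sync,no_subtree_check)

# on the compute box, /etc/fstab
nas.local:/volume1/media  /mnt/media  nfs  defaults,_netdev  0  0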
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,050
437
83
Separate compute and storage: let the NAS focus on just file serving, and have a separate compute box (TMM/uSFF, old desktop, rack server, what have you) run the heavy containers. NFS or iSCSI.
Been there, done that. Boring and not efficient. Two boxes using 10W each are less efficient than one box using 15W and doing the job of two (numbers just illustrate the point). Everyone and their uncle is consolidating compute and storage, from hyperconverged infrastructure to DPUs and CXL memory cards. Keeping storage and compute separate - while indeed easy to manage and reliable - is inefficient and old. How do you suppose the cloud providers work? Do you think they have large EMC/NetApp arrays in their data centers?

I used to run an 18-disk DIY FreeNAS box and a 3-node all-flash vSAN - junked the lot, including a 10GbE Brocade switch that was pretty much required for the vSAN cluster. Replaced it all with a single (but beefed-up) NAS and saved nearly $100 on the electrical bills.
 
Last edited:

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
Been there, done that. Boring and not efficient. Two boxes using 10W each are less efficient than one box using 15W and doing the job of two (numbers just illustrate the point). Everyone and their uncle is consolidating compute and storage, from hyperconverged infrastructure to DPUs and CXL memory cards. Keeping storage and compute separate - while indeed easy to manage and reliable - is inefficient and old. How do you suppose the cloud providers work? Do you think they have large EMC/NetApp arrays in their data centers?
I agree. I'm pretty excited for TrueNAS Scale, and I'm trying to stop myself from making major changes to my home setup until I can properly plan how to consolidate as much as possible onto the fewest machines. Until I figure things out, I'll probably have a go at consolidating my physical machines onto a single beefy ESXi or Proxmox server. I had actually decided to continue with ESXi rather than switch to Proxmox, but with the impending sale of VMware I'm not so sure anymore. ESXi skills are currently more relevant and transferable, though.

Actually, have you run or played around with QNAP units before? I've only ever done DIY FreeNAS and, recently, Synology, so if you have insight on QNAP being better for running containers and VMs, I'd love to hear it. I was pretty close to buying a TVS-872XT, which I had been eyeing for a while, but when QNAP released their ZFS implementation I got interested in that instead. I would've preferred a traditional layout maximizing 3.5" drives rather than the split drive sizes and interfaces of their newer high-end units, though.
 

Sean Ho

seanho.com
Nov 19, 2019
768
352
63
Vancouver, BC
seanho.com
Yes, I agree for the general case. Even my tiny homelab is HCI, mostly LGA 2011-0 and 2011-3; all nodes have at least some local storage.

But OP is coming from a single Syno struggling to run a few Docker containers; the expressed use case is primarily media storage and Plex. I am not advising OP to implement a SAN. A power-efficient all-in-one box could be built with a desktop Skylake-or-newer board (or a pricier C246-or-newer server board) and a CPU with QSV. Or, cheaply and efficiently, with a separate QSV TMM. And of course, if they already have a GPU and don't mind the power/cooling, NVENC is fine too; plenty of ways to do it.
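If you go the QSV route, a one-liner to sanity-check that hardware transcoding actually works (filenames hypothetical; needs a recent ffmpeg and access to /dev/dri):

Code:
# decode H.264 and encode HEVC entirely on the iGPU via Quick Sync
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 -c:v hevc_qsv -b:v 4M output.mp4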

Even for the big providers, despite management / provisioning being consolidated for simplicity, some SKUs are more appropriate for filers, some for transactional DBs, some for GPGPU, etc. Yes, purely diskless nodes are becoming rare, but specialisation of roles will always have value.
 
  • Like
Reactions: BoredSysadmin

Logan

Member
Feb 22, 2017
65
9
8
To provide a little closure on my original post: I went with the ODROID HC4. I'll use btrfs and maybe UrBackup. I also considered these 4- and 8-bay cases for DIY, but I'm not sure I'll need more than 2 bays for a while, so it doesn't make much sense for me. I believe they're manufactured by Innovision. Note that the 8-bay model can fit a micro-ATX motherboard.



I couldn't find a cheap, small hot-swap chassis like these that fits both a micro-ATX motherboard and a normal-size ATX power supply, which would have provided a lot more DIY options at a low price.

Thanks for your assistance.
 
  • Like
Reactions: dandanio

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
Yeah we kind of derailed the thread lol, sorry about that mate.

To bring it back to simpler solutions: I just feel that the ODROID HC4 won't have much support among distros, plus the exposed nature of the drives gives me pause, as a simple accidental bump might knock them over. OMV is supported though, and that's a decent, simple distro to use. Otherwise you'd have to roll your own implementation, which vastly complicates administration. Others may feel differently, but in my mind that's not great for a NAS, which should be simple and reliable to run.
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
Yes, I agree for the general case. Even my tiny homelab is HCI, mostly LGA 2011-0 and 2011-3; all nodes have at least some local storage.

But OP is coming from a single Syno struggling to run a few Docker containers; the expressed use case is primarily media storage and Plex. I am not advising OP to implement a SAN. A power-efficient all-in-one box could be built with a desktop Skylake-or-newer board (or a pricier C246-or-newer server board) and a CPU with QSV. Or, cheaply and efficiently, with a separate QSV TMM. And of course, if they already have a GPU and don't mind the power/cooling, NVENC is fine too; plenty of ways to do it.

Even for the big providers, despite management / provisioning being consolidated for simplicity, some SKUs are more appropriate for filers, some for transactional DBs, some for GPGPU, etc. Yes, purely diskless nodes are becoming rare, but specialisation of roles will always have value.
OP probably isn’t using a lower end Synology at all. In our thread derailment (I’m to blame as well), we overlooked the OP’s actual stated requirements.

Going back to the original requirements: for a simple NAS with no need for more than 5-6 drives, ZFS hardly makes sense. RAID-6 may not make sense either, since there are too few drives to make the capacity penalty worthwhile. For this I would go with a lower-end x86 dual-bay NAS with mirroring, or a 4-6 bay NAS with RAID-5/6 if capacity growth is desired. Personally I stay away from expansion units, as they are expensive and it usually makes more sense to spend a little more on a second NAS.
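To put rough numbers on the capacity penalty, a back-of-envelope sketch (drive count and size are just illustrative):

Code:
# usable capacity of common layouts for a small array
def usable_tb(n, size_tb, layout):
    overhead = {"raid1": n - 1, "raid5": 1, "raid6": 2, "raid10": n / 2}
    return (n - overhead[layout]) * size_tb

for layout in ("raid1", "raid5", "raid6", "raid10"):
    print(f"4x12TB {layout}: {usable_tb(4, 12, layout):.0f} TB usable")
# raid1: 12, raid5: 36, raid6: 24, raid10: 24 -> at four drives,
# RAID-6 buys no capacity over RAID-10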
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,050
437
83
OP probably isn’t using a lower end Synology at all. In our thread derailment (I’m to blame as well), we overlooked the OP’s actual stated requirements.

Going back to the original requirements: for a simple NAS with no need for more than 5-6 drives, ZFS hardly makes sense. RAID-6 may not make sense either, since there are too few drives to make the capacity penalty worthwhile. For this I would go with a lower-end x86 dual-bay NAS with mirroring, or a 4-6 bay NAS with RAID-5/6 if capacity growth is desired. Personally I stay away from expansion units, as they are expensive and it usually makes more sense to spend a little more on a second NAS.
OP mentioned photo storage, and I assume the interest in btrfs and ZFS is due to checksumming/bit-rot protection. I agree that a two-drive unit like the one linked above should do the trick, with simple mirroring for drive resiliency. I disagree on RAID-5/6 for 4-drive units: you really shouldn't use RAID-5 with modern larger drives due to the chance of a failure during rebuild, and RAID-6 is unnecessarily complicated for such a small system. My 5c is to go with RAID-10: a much faster system with comparable protection. ZFS is probably not the best fit here, and while btrfs is still a fairly young and not fully mature file system, I assume there is a reason Synology decided to use it in production.
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
OP mentioned photo storage, and I assume the interest in btrfs and ZFS is due to checksumming/bit-rot protection. I agree that a two-drive unit like the one linked above should do the trick, with simple mirroring for drive resiliency. I disagree on RAID-5/6 for 4-drive units: you really shouldn't use RAID-5 with modern larger drives due to the chance of a failure during rebuild, and RAID-6 is unnecessarily complicated for such a small system. My 5c is to go with RAID-10: a much faster system with comparable protection. ZFS is probably not the best fit here, and while btrfs is still a fairly young and not fully mature file system, I assume there is a reason Synology decided to use it in production.
I'm tired hah. You're absolutely right that a 4-drive NAS should go with RAID-10 rather than RAID-5 when high-capacity drives are used, due to the increased chance of another drive failing during the rebuild.

AFAIK Synology uses their own implementation of BTRFS, supposedly with proprietary patches to address its shortcomings.
 

i386

Well-Known Member
Mar 18, 2016
4,220
1,540
113
34
Germany
you really shouldn't use RAID-5 with modern larger drives due to the chance of a failure during rebuild,
Care to elaborate?
Even with worse settings than 'real world' estimates, the MTTDL model says it would take ~92 years until data loss/catastrophic failure.
Setup:
4 disks
22TB each
4KB sector size
1 volume
5MB/s rebuild speed
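For reference, a minimal sketch of the classic MTTDL formula for single-parity RAID (the per-disk MTTF is an assumption; real calculators add URE terms, so absolute numbers swing by orders of magnitude depending on inputs):

Code:
# MTTDL ~ MTTF^2 / (N * (N - 1) * MTTR) for RAID-5
N, size_tb, rebuild_mb_s = 4, 22, 5
mttf_h = 1_000_000  # assumed per-disk MTTF in hours; pick your own
rebuild_h = size_tb * 1e12 / (rebuild_mb_s * 1e6) / 3600  # ~1222 h
mttdl_years = mttf_h ** 2 / (N * (N - 1) * rebuild_h) / 8760
print(f"rebuild {rebuild_h:.0f} h, MTTDL ~{mttdl_years:.0f} years")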
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
Tbh, RAID is more a way to keep services running while a failure is resolved. While it's nice not to take the whole system and its services offline to restore from backup, there's always a chance more drives fail during a rebuild. It's happened to me once in my homelab, and I saw it happen occasionally back in the day when clients still ran their own datacenters.

Even running RAID-6, I would be extremely nervous if I needed to rebuild and didn't have a backup of the array's data. RAID-10 is probably the best blend of speed and resiliency, though there are scenarios where RAID-6 makes more sense for the increased capacity.
 
  • Like
Reactions: Sean Ho

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,050
437
83
Care to elaborate?
Even with worse settings than 'real world' estimates, the MTTDL model says it would take ~92 years until data loss/catastrophic failure.
Setup:
4 disks
22TB each
4KB sector size
1 volume
5MB/s rebuild speed

or if you like statistics:
 

i386

Well-Known Member
Mar 18, 2016
4,220
1,540
113
34
Germany
Did you read the follow-up article and the one from digistor?
I did, and the numbers from digistor don't make sense to me. I ran some of those setups (4x 3TB and 8x 6TB) for ~5 years and rebuilt them multiple times successfully; according to the digistor numbers that should be very unlikely (48.7% for the 4x3TB and 3.48% for the 8x6TB setup)...
I tried to look up other, 'more reliable' sources, and they are hard to find.
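Those digistor percentages look like the standard URE-during-rebuild calculation; assuming the common 1-per-10^14-bits consumer URE spec, this sketch reproduces them almost exactly:

Code:
import math

# P(no unrecoverable read error while reading every surviving disk)
def rebuild_success(n_disks, size_tb, ure_per_bit=1e-14):
    bits_read = (n_disks - 1) * size_tb * 1e12 * 8
    return math.exp(-ure_per_bit * bits_read)  # ~ (1 - ure)^bits

print(f"4x3TB RAID-5: {rebuild_success(4, 3):.1%}")  # ~48.7%
print(f"8x6TB RAID-5: {rebuild_success(8, 6):.1%}")  # ~3.5%

The model assumes every single URE kills the rebuild, which real arrays (and drives that beat their spec sheet) often don't bear out - which would explain the mismatch with my experience.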
 
  • Like
Reactions: TRACKER

dandanio

Active Member
Oct 10, 2017
182
70
28
To provide a little closure on my original post: I went with the ODROID HC4. I'll use btrfs and maybe UrBackup. [...]

Thanks for your assistance.
You are welcome. You will be very happy with your solution for what you need it for.
 

Sean Ho

seanho.com
Nov 19, 2019
768
352
63
Vancouver, BC
seanho.com
If the ODROID with two spinners meets your needs, then so much the better! btrfs is certainly very flexible with drive sizes and rebalancing. With two drives, I assume you're using btrfs raid1. If/when the time comes to move to raid5/6 or raid10, be aware of btrfs' differences from hardware raid, mdadm, or zfs. In particular, btrfs raid10 does not give you the ease of recovery that a normal raid10 offers.
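For example (device names hypothetical), growing from two to four drives later would look like:

Code:
# two-device btrfs raid1
mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb
mount /dev/sda /mnt/pool
# later: add two drives, then rebalance data+metadata to raid10
btrfs device add /dev/sdc /dev/sdd /mnt/pool
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/pool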

Reviewing this thread, my apologies; I think I mixed up OP with a different poster, who was coming from a Syno and wanted to run Plex and other containers.

My understanding is SHR is simply single-dev btrfs on top of mdadm; no special sauce. My info could be out of date, though.
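If anyone with SSH access to a Syno wants to check, the layering should be visible with stock tools:

Code:
cat /proc/mdstat            # md arrays backing the storage pool
sudo pvs; sudo lvs          # LVM layer, if present on the model
sudo btrfs filesystem show  # btrfs sitting on a single device
df -T /volume1              # filesystem type of the volume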
 
  • Like
Reactions: ReturnedSword