Home server architecture thoughts?


matt_garman

Active Member
Feb 7, 2011
230
68
28
In this thread, I posted pics of my recently-rebuilt home server. Now that the hardware part is done (maybe?), I'm thinking it's time to revisit the OS/software side. So I'm soliciting feedback.

I don't think anything is really broken, but it's non-optimal because it has grown up organically. Framing it in terms of my goals, it boils down to two main factors: simplicity and ease of backup/restore (plus a related goal of data integrity, i.e. fear of bit-rot).

Here are the roles of this server:
  • MythTV backend. Single HDHomeRun tuner right now, maybe two some day.
  • Zoneminder PVR for two, soon to be three, high-resolution cameras doing motion detection.
  • Media storage/NAS. Who on this forum doesn't have a big media storage requirement? :)
  • Video transcoding via Handbrake.
  • Hobby software development (lightweight fun stuff when time permits).

So here's how I have this set up now:
  • Core: Xeon D-1541 w/32 GB RAM. More than adequate for current or anticipated needs.
  • Crucial MX300 1TB M.2 SSD + Intel 320 300GB in Linux software RAID-1 for the OS (yes, a huge size discrepancy; I'll upgrade the 300GB soon). I also use this mirror for my personal home directory. It houses the MythTV backend and Ubuntu/Handbrake containers (both via systemd-nspawn). The main OS runs the sole MariaDB and Apache instances, shared by both Zoneminder and MythTV. Once the other drive is upped to 1TB or so, I should have plenty of space for these purposes.
  • Dual Hitachi He8 8TB drives for media and other storage. Another Linux software RAID-1 setup.
  • Single Crucial MX300 1TB 2.5" SATA SSD. This houses my Zoneminder container (via systemd-nspawn); it's a holdover from when Zoneminder ran on a separate system.
  • One WD Red 6TB disk as a "catchall" for temporary storage. Mostly this is MythTV recordings. This data falls in the "annoying but OK to lose, not backed up" category.
  • Except for the Ubuntu container, all OSes are CentOS 7.x.
  • All filesystems are xfs.
  • Backups: the first layer is a secondary system with a big ZFS raidz3 store and a custom backup script (basically just an rsync wrapper; rough sketch below). The second layer is a CrashPlan subscription.
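For the curious, the backup script really is just a thin rsync wrapper, roughly along these lines (the hostname and paths here are placeholders, not my actual layout):

    #!/bin/bash
    # Thin rsync wrapper: push a few directories to the ZFS backup box.
    # "backuphost" and the source paths are illustrative only.
    set -euo pipefail

    DEST="backuphost:/tank/backups/$(hostname -s)"
    SRC_DIRS=(/home /srv/media /etc)

    for dir in "${SRC_DIRS[@]}"; do
        # -a             preserve permissions/ownership/times
        # --delete       mirror deletions so the copy matches the source
        # --numeric-ids  don't remap uid/gid on the receiving side
        rsync -a --delete --numeric-ids "$dir" "$DEST/"
    done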

Not too bad, but it's kind of a patchwork that came about via organic growth, and I think it could be somewhat cleaner.

A bit of backstory: several years ago (specifically, before I had kids), I had a lot more time. So I didn't mind, and in fact enjoyed, having multiple systems, working with "rolling" Linux distributions, building custom packages, and tweaking things incessantly. These days I have far less "play time" for home infrastructure than I used to, so I'm always trying to think of how I can make things simpler. Simpler saves time, but I also believe it reduces the frequency of issues. So one of my first steps was to standardize on CentOS: not because it's fun or sexy, but because it's what I use at work; I already have a lot of "thought capital" invested in it.

You can see from the above that I'm making use of containers, specifically systemd-nspawn. This came about somewhat organically. I used to have all this stuff under the main host (bare metal) OS. But then any kind of upgrade became a nightmare with package collisions ("RPM hell"). This is mostly due to MythTV, which has a trillion dependencies that sometimes conflict with the base packages. A VM felt too heavy/overkill for this application, and I finally got up to speed with containers. Specifically, the (original?) container, chroot. I used this happily under CentOS 6. And then I found chroot didn't really work under CentOS 7, and that's what turned me on to systemd-nspawn, aka "chroot on steroids".
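For anyone who hasn't played with it, the nspawn workflow is roughly the following (a sketch, not my exact commands; the machine name and package list are just examples):

    # Bootstrap a minimal CentOS 7 tree under /var/lib/machines
    yum -y --releasever=7 --installroot=/var/lib/machines/mythtv \
        install systemd passwd yum centos-release

    # chroot-style shell inside the tree, for poking around
    systemd-nspawn -D /var/lib/machines/mythtv

    # boot it like a lightweight VM (the container's own systemd runs as PID 1)
    systemd-nspawn -D /var/lib/machines/mythtv -b

    # or manage it like any other machine once it lives under /var/lib/machines
    machinectl start mythtv
    machinectl list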

I've been lurking around this forum more than usual lately, and I get the sense that if I were doing this from scratch, something like Proxmox VE would probably be the suggestion. I've done some reading on it, and it looks pretty slick. But in my mind I'm weighing "the devil I know" (CentOS) against the one I don't (Proxmox). And fundamentally, is there anything I can't really do with CentOS that I can do with Proxmox?

Here's what I'm thinking so far as a way to re-architect this:
  • Base CentOS 7.x system does only NAS duty, monitoring, and container storage. Maybe also the system I log in to for my development and other playing.
  • MythTV container: will have its own MariaDB and HTTP instances. Originally I thought it would be more efficient to have a single centralized instance of those services. But then I got to thinking: what if one day ZM and MythTV have conflicting dependency requirements? Or I want to add a new service that conflicts with either existing one? Systemd-nspawn supports virtual networking, so my new mode of thinking is to make every service a "lightweight VM" using systemd-nspawn (see the example .nspawn config after this list).
  • Zoneminder container. Similar thoughts as MythTV container.
  • Anything new that comes along: throw it in its own container, unless it's simple and part of the stock OS.
  • Ubuntu/Handbrake container. This probably can use shared networking as it currently does. More like chroot than a VM.
  • Backups? What I have now is "OK", but I think it can be better. Also, I haven't addressed the data integrity/bit-rot problem.
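To make the "lightweight VM" idea concrete, the per-container knobs can live in an .nspawn file; here is a rough example for a hypothetical MythTV container (the port and paths are illustrative):

    # /etc/systemd/nspawn/mythtv.nspawn (hypothetical example)
    [Exec]
    Boot=yes

    [Network]
    # private veth pair instead of sharing the host's network stack
    VirtualEthernet=yes
    # expose the MythTV backend's web/services port on the host side
    Port=tcp:6544:6544

    [Files]
    # bind the recordings directory from the host into the container
    Bind=/srv/recordings:/var/lib/mythtv/recordings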

Seems pretty straightforward, and not terribly different from what I have now. So, first question: does this architecture make sense? Second question: how best to organize storage for all this? Specifically:
  • Reading threads here, I get the sense most people are going the lightly-used enterprise SSD route over cheap new consumer SSDs. Should I grab (for example) an 800GB S3700 from eBay for the system drive I need to upgrade? I'm still within my return period (minus a 15% restocking fee) for that MX300 M.2 drive; should I return it and get a used enterprise drive instead?
  • Filesystem: Linux software RAID (md) and xfs/ext4 may not be exciting, but they are time-tested and battle-proven. There's lots of love here (and elsewhere) for ZFS; I use it for my backup server and think its feature set is impressive. But see here for one opposing view (summary: "stable but not production ready"), and it still seems somewhat duct-taped together under Linux. On the other hand, with the zfs send/receive functionality, I feel like backups become better and simpler (rough sketch of that workflow after this list). Btrfs seems perpetually stuck at "only 1-2 years away from production ready". With these next-gen filesystems, am I trading one set of problems for another? Or is it a net win?
  • What is the prevalence of bit-rot anyway? I've never experienced it, but then again, how would I know? Spinning drives already have block-level checksums built in, and I assume SSDs do as well; a mismatched checksum surfaces as a read error, so there is already some integrity protection by default.
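For context, the send/receive workflow I have in mind looks roughly like this (pool, dataset, and host names are made up):

    # snapshot on the primary box
    zfs snapshot tank/media@snap1

    # initial full replication to the backup box
    zfs send tank/media@snap1 | ssh backuphost zfs receive -u backup/media

    # later: send only the blocks that changed between snapshots
    zfs snapshot tank/media@snap2
    zfs send -i tank/media@snap1 tank/media@snap2 | ssh backuphost zfs receive -u backup/media

    # periodic scrubs are what actually detect (and, with redundancy, repair) bit-rot
    zpool scrub tank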

Lastly, in terms of storage capacity, I'm mostly OK, but the distribution between SSD, spinning disk, and what is mirrored/not-mirrored is sub-optimal. Ideally I'd like to have something like this:
  • 250 GB mirrored SSDs: base system + containers
  • 400 GB mirrored SSD for Zoneminder events
  • 300 GB mirrored SSD for home directory
  • at least 5TB single spinning rust for MythTV recordings and random non-important stuff
  • at least 8TB mirrored spinning rust for media storage and Zoneminder archives

I'm mostly there, but clearly need to re-org the SSDs. And I want to do that without breaking the bank! Oh, and I'm limited to six total SATA devices (want to do this exclusively with motherboard SATA ports).

If you managed to read through all that, my thanks! (Or maybe condolences?) If you have an opinion on how to best approach this, I'm happy to hear it.
 

CJRoss

Member
May 31, 2017
98
6
8
Have you considered doing Plex or Emby instead of MythTV? They both have DVR functionality now. I've been using Plex DVR for quite a while and it's been working great.
 

PigLover

Moderator
Jan 26, 2011
3,215
1,574
113
@matt_garman - What Proxmox gives you over raw CentOS is 'simplicity and ease of backup/restore', which appear to be your stated goals.

It seems you are resisting the obvious answer to your own (TLDR) question.

 

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
I'm sure you have seen the numerous posts regarding enterprise vs consumer SSDs. I had always thought that for a low-use home server it wouldn't matter. But when running more than one VM off a consumer SSD, especially during heavy writes (in my case a really big Sab queue), all VMs slowed to a crawl. Every time I started Sab the same thing happened, and I ended up just restoring an older snapshot to get rid of the queue. Every time there was an unpack in Sab, the system would slow down and my Plex streams would start to stutter.

The issue went away once I replaced the MX300 750GB with an S3700. Not sure if there was anything else at play here, but all I did was add the SSD and move the VMs to the new datastore, and everything stabilized.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
601
113
@K D yep, that will be it. Even low-end enterprise drives like the Samsung PM863 do way better than consumer drives. Intel seems to be better again, though in that case there's not much in it. After you use enterprise drives, going back to consumer is no fun in anything other than a workstation.
 

matt_garman

Active Member
Feb 7, 2011
230
68
28
Have you considered doing Plex or Emby instead of MythTV? They both have DVR functionality now. I've been using Plex DVR for quite a while and it's been working great.
I didn't even know Plex had DVR capabilities until you mentioned it... I probably won't switch any time soon, but since you mentioned it, I did a little light reading about it, and I might have to put Plex on my server to augment MythTV (as opposed to replacing it). Another thing that falls into the "devil you know" category. MythTV has plenty of warts, but after using Myth for over a decade, I'm pretty familiar with them. But I have been contemplating replacing my Myth frontend with Kodi. Seems like Myth + Plex + Kodi would be pretty slick (at the expense of more complexity).


@matt_garman - What Proxmox gives you over raw CentOS is 'simplicity and ease of backup/restore', which appear to be your stated goals.

It seems you are resisting the obvious answer to your own (TLDR) question.
I'm warming to the idea for sure. I'll have to find some time to do a test install of Proxmox to get a feel for it.

How would you suggest I lay out everything if I went the Proxmox route? Specifically, the root FS for Proxmox itself, and also the NAS aspect. Should I get a single smallish SATA DOM for the base Proxmox install, then put all the real meat on mirrored SSDs and spinners? Or should Proxmox live on a partition on the mirrored SSDs? And would the NAS be its own VM (e.g. FreeNAS), or would the hypervisor itself do NAS duties (as I would do if going the CentOS route)?


I'm sure you have seen the numerous posts regarding enterprise vs consumer SSDs. I had always thought that for a low-use home server it wouldn't matter. But when running more than one VM off a consumer SSD, especially during heavy writes (in my case a really big Sab queue), all VMs slowed to a crawl. Every time I started Sab the same thing happened, and I ended up just restoring an older snapshot to get rid of the queue. Every time there was an unpack in Sab, the system would slow down and my Plex streams would start to stutter.
I have read a number of posts on the topic. For my particular use case, I'm struggling to see the benefit of enterprise SSDs. On the one hand, they are powered on 24/7 and possibly subject to higher heat than they would be otherwise (conditions enterprise SSD design assumes). On the other hand, my write load is minuscule. This is home use, so if the VMs/containers take an extra minute to boot, I don't care; reboots should be very infrequent anyway. I would characterize my write workload as more desktop-like. Then there's reliability and longevity; those are still question marks in my mind.
 

Stephan

Well-Known Member
Apr 21, 2017
1,033
801
113
Germany
@K D Just a guess: you ran into thermal throttling under the heavy writes, and the queue dragged down everything else that wanted to write.
 

CJRoss

Member
May 31, 2017
98
6
8
I didn't even know Plex had DVR capabilities until you mentioned it... I probably won't switch any time soon, but since you mentioned it, I did a little light reading about it, and I might have to put Plex on my server to augment MythTV (as opposed to replacing it). Another thing that falls into the "devil you know" category. MythTV has plenty of warts, but after using Myth for over a decade, I'm pretty familiar with them. But I have been contemplating replacing my Myth frontend with Kodi. Seems like Myth + Plex + Kodi would be pretty slick (at the expense of more complexity).
Unless Myth has gotten a lot simpler since I last used it, Plex has far fewer warts and is much easier to use.
 

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
@K D Just a guess: you ran into thermal throttling under the heavy writes, and the queue dragged down everything else that wanted to write.
Could be, but I doubt it. The drive temp stayed around 18-20C, which is normal. I didn't have the patience to research further; I just swapped the SSD, and when everything worked I moved on.
 

matt_garman

Active Member
Feb 7, 2011
230
68
28
How would you suggest I lay out everything if I went the Proxmox route? Specifically, the root FS for Proxmox itself, and also the NAS aspect. Should I get a single smallish SATA DOM for the base Proxmox install, then put all the real meat on mirrored SSDs and spinners? Or should Proxmox live on a partition on the mirrored SSDs? And would the NAS be its own VM (e.g. FreeNAS), or would the hypervisor itself do NAS duties (as I would do if going the CentOS route)?
Looks like my question has essentially already been asked and answered here: How should I partition my Proxmox install on 2x800GB Raid1 array? My takeaway is that putting the SSDs into a single ZFS mirror and letting Proxmox live on it is probably the most popular choice. I'm from the old school, too used to manual partitioning. :)
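From my reading, a ZFS RAID1 install lays things out roughly like this, and extra datasets can be carved out of the same pool instead of partitions (names are from the docs and forum threads, not verified by me yet):

    # inspect the mirror and default datasets after a ZFS RAID1 install
    zpool status rpool      # the two SSDs show up as a single mirror vdev
    zfs list -r rpool       # rpool/ROOT/... holds the OS, rpool/data holds guest disks

    # carve out extra datasets from the same pool, e.g. for a home directory
    zfs create -o mountpoint=/home rpool/home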

I'll just have to spend some time playing with Proxmox on a test machine.
 

sno.cn

Active Member
Sep 23, 2016
211
76
28
I run all of my Proxmox installs on mirrored 64GB SSDs, but Proxmox can certainly live on its own ZFS filesystem in a larger pool. In some cases I'm even running a couple of smaller VMs on my root pool.

Here's how one of my home Proxmox hosts looks: I have 2 x 64GB Micron P400e SSDs for root pool (onboard SATA), 8 x 400GB Seagate 1200 SSDs for VM storage (9340-8i), and 6 x 4TB WD RE4 SAS HDDs for data storage (9211-8i). All of my pools use ZFS mirrors.

On my HDD pool, I have filesystems for Data, ISO, Games, Media that I'm mounting into an Ubuntu LXC container, and then sharing out to my network. This container lives in my root pool, along with a Windows VM template, since these don't need any disk IO. You can also use one of the TurnKey container images for this if you want more of a preconfigured setup.

Then my media filesystem is mounted into another Ubuntu container running Plex. I used to mount a network share for Plex, but I think the setup I have now is ideal.
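For reference, getting a host filesystem into one of these LXC containers is just a mount-point entry on the container, something like this (the container ID, dataset, and target path are examples, not my exact config):

    # bind-mount a host dataset into LXC container 101
    pct set 101 -mp0 /tank/media,mp=/mnt/media

    # ...which ends up as a line like this in /etc/pve/lxc/101.conf:
    #   mp0: /tank/media,mp=/mnt/media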
 

apnar

Member
Mar 5, 2011
115
23
18
I ran Proxmox at home for a while but gave it up for Ubuntu on bare metal. Proxmox was good when I had many VMs, but now that almost all of my home services have moved to Docker containers, I find bare-metal Ubuntu with ZFS a much better fit (this coming from someone who has been a Red Hat/CentOS die-hard for years). I run everything you mention (with the exception of Zoneminder) in Docker containers, and I've gone from a handful of Linux VMs down to only one. For the three VMs I still run (Linux, macOS, Windows), KVM with virt-manager works just fine.
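As an example of what that looks like in practice, a typical service is just a single docker run against the ZFS datasets (the image and paths here are illustrative, not my exact setup):

    # Plex as a Docker container with media and config bind-mounted from the host
    docker run -d --name plex \
        --network host \
        -v /tank/media:/data:ro \
        -v /tank/appdata/plex:/config \
        plexinc/pms-docker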