In this thread, I posted pics of my recently-rebuilt home server. Now that the hardware part is done (maybe?), I'm thinking it's time to revisit the OS/software side. So I'm soliciting feedback.
I don't think anything is really broken, but it's mostly non-optimal due to having grown up organically. Framing it in terms of my goals, it boils down to two main factors: simplicity and ease of backup/restore (and a related goal of data integrity/fear of bit-rot).
Here are the roles of this server:
- MythTV backend. Single HDHomeRun tuner right now, maybe two some day.
- Zoneminder PVR for two, soon to be three, high-resolution cameras doing motion detection.
- Media storage/NAS. Who on this forum doesn't have a big media storage requirement?
- Video transcoding via Handbrake.
- Hobby software development (lightweight fun stuff when time permits).
So here's how I have this set up now:
- Core: Xeon D-1541 w/ 32 GB RAM. More than adequate for current or anticipated needs.
- Crucial MX300 1TB M.2 SSD + Intel 320 300GB in Linux software RAID-1 for the OS (yes, huge size discrepancy; I'll upgrade the 300GB soon). I also use this mirror for my personal home directory. It houses the MythTV backend and Ubuntu-Handbrake containers (both via systemd-nspawn). The main OS is running the sole MariaDB and Apache instances, shared by both Zoneminder and MythTV. Once the other drive is upped to 1TB or so, I should have plenty of space for these purposes.
- Dual Hitachi HE8 8TB drives for media and other storage. Another Linux software RAID-1 setup.
- Single Crucial MX300 1TB 2.5" SATA SSD. This houses my Zoneminder container (via systemd-nspawn). This setup is a holdover from when Zoneminder ran on a separate system.
- One WD RED 6TB disk as a "catchall" for temporary storage. Mostly this is MythTV recordings. This data falls in the "annoying but OK to lose, not backed up" category.
- Except for the Ubuntu container, all OSes are CentOS 7.x.
- All filesystems are xfs.
- Backups: first is a secondary system with a big ZFS raidz3 store, custom backup script (basically just an rsync wrapper). Second is a CrashPlan subscription.
Not too bad, but kind of a patchwork, and I think it could be somewhat cleaner. Came about via organic growth.
A bit of backstory: several years ago, specifically, before I had kids, I had a lot more time. So I didn't mind (in fact I enjoyed) having multiple systems, working with "rolling" Linux distributions, building custom packages, tweaking things incessantly. These days I have far less "play time" for home infrastructure than I used to. So I'm always trying to think of how I can make things simpler. Simpler saves time, but I also believe it reduces the frequency of issues. So one of my first steps was to standardize on CentOS. Not because it's fun or sexy, but it's what I use at work; I already have a lot of "thought capital" invested in it.
You can see from the above that I'm making use of containers, specifically systemd-nspawn. This came about somewhat organically. I used to have all this stuff under the main host (bare metal) OS. But then any kind of upgrade became a nightmare with package collisions ("RPM hell"). This is mostly due to MythTV, which has a trillion dependencies that sometimes conflict with the base packages. A VM felt too heavy/overkill for this application, and I finally got up to speed with containers. Specifically, the (original?) container, chroot. I used this happily under CentOS 6. And then I found chroot didn't really work under CentOS 7, and that's what turned me on to systemd-nspawn, aka "chroot on steroids".
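For anyone unfamiliar, the workflow is pleasantly chroot-like. A rough sketch (the container name and path are just example values):

```shell
# Install a minimal CentOS 7 tree into a directory, chroot-style
# ("mythtv" is an example container name):
yum --installroot=/var/lib/machines/mythtv --releasever=7 \
    install -y systemd passwd yum
# Boot it as a lightweight VM -- the container runs its own systemd as PID 1:
systemd-nspawn -D /var/lib/machines/mythtv -b
# Once the tree lives under /var/lib/machines, machinectl can manage it:
machinectl start mythtv
```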
I've been lurking around this forum more than usual lately, and I get the sense that if I was doing this from scratch, something like Proxmox VE would probably be the suggestion. And I've done some reading on it, and it looks pretty slick. But in my mind I'm weighing "the devil I know" (CentOS) versus the one I don't (Proxmox). And fundamentally, is there anything I can't really do with CentOS that I can do with Proxmox?
Here's what I'm thinking so far as a way to re-architect this:
- Base CentOS 7.x system does only NAS duty, monitoring, and container storage. Maybe also the system I log in to for my development and other playing.
- MythTV container: will have its own MariaDB and HTTP instances. Originally I thought it would be more efficient to have a single centralized instance of those services. But then I got to thinking, what if one day ZM and MythTV have conflicting dependency requirements? Or I want to add a new service that conflicts with either existing one? Systemd-nspawn supports virtual networking, so my new mode of thinking is to make every service a "lightweight VM" using systemd-nspawn.
- Zoneminder container. Similar thoughts as MythTV container.
- Anything new that comes along: throw it in its own container, unless it's simple and part of the stock OS.
- Ubuntu/Handbrake container. This probably can use shared networking as it currently does. More like chroot than a VM.
- Backups? What I have now is "OK", but I think it can be better. Also, I haven't addressed the data integrity/bit-rot problem.
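Concretely, the "lightweight VM" mode is just a per-container .nspawn file; a sketch (container name again an example) might look like:

```ini
# /etc/systemd/nspawn/mythtv.nspawn -- give the container its own
# virtual ethernet pair instead of sharing the host's network stack
[Exec]
Boot=yes

[Network]
VirtualEthernet=yes
```

With `VirtualEthernet=yes` the host sees a `ve-mythtv` interface and the container sees `host0`, so each service gets its own IP and can run its own MariaDB/Apache on the standard ports without colliding with the host or with each other.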
Seems pretty straightforward, and not too terribly different than what I have now. So first question: does this architecture make sense? Second question, how best to organize storage for all this? Specifically:
- Reading threads here, I get the sense most people are going the lightly used enterprise SSD route over cheap new consumer SSDs. Should I grab (for example) an 800GB S3700 from ebay for the system drive I need to upgrade? I'm still within my return period (sans 15% restocking fee) for that MX300 M.2 drive, should I return it and get a used enterprise drive?
- Filesystem: Linux software RAID (md) and xfs/ext4 may not be exciting, but they are time-tested and battle-proven. Lots of love here (and other places) for ZFS; I use it for my backup server and think its feature set is impressive. But, see here for one opposing view (summary: "stable but not production ready"). And it still seems somewhat duct-taped together under Linux. But with the zfs send/receive functionality, I feel like backups become better and simpler. Btrfs seems perpetually stuck in the "only 1-2 years away from production ready" stage. With these next-gen filesystems, am I trading one set of problems for another? Or is it a net win?
- What is the prevalence of bit-rot anyway? I've never experienced it, but then again, how would I know? Spinning drives already have block-level checksums built in, and I assume SSDs do as well. A read fails with an error if the checksum doesn't match, so there's already some integrity protection by default.
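On the zfs send/receive point, the appeal is that incremental backups become a diff between snapshots. Roughly (pool/dataset names and snapshot dates invented for illustration):

```shell
# First-time full replication of a dataset to the backup box:
zfs snapshot tank/media@2017-06-01
zfs send tank/media@2017-06-01 | ssh backuphost zfs receive backup/media
# Subsequent runs send only the blocks changed between the two snapshots:
zfs snapshot tank/media@2017-06-08
zfs send -i tank/media@2017-06-01 tank/media@2017-06-08 | \
    ssh backuphost zfs receive backup/media
# And a periodic scrub re-verifies every block against its checksum,
# which is the actual answer to the bit-rot worry:
zpool scrub tank
```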
Lastly, in terms of storage capacity, I'm mostly OK, but the distribution between SSD, spinning disk, and what is mirrored/not-mirrored is sub-optimal. Ideally I'd like to have something like this:
- 250 GB mirrored SSDs: base system + containers
- 400 GB mirrored SSD for Zoneminder events
- 300 GB mirrored SSD for home directory
- at least 5TB single spinning rust for MythTV recordings and random non-important stuff
- at least 8TB mirrored spinning rust for media storage and Zoneminder archives
I'm mostly there, but clearly need to re-org the SSDs. And I want to do that without breaking the bank! Oh, and I'm limited to six total SATA devices (want to do this exclusively with motherboard SATA ports).
If you managed to read through all that, my thanks! (Or maybe condolences?) If you have an opinion on how to best approach this, I'm happy to hear it.