General Home Server suggestions - Plex, HomeAssistant, Email with room to grow


TonyArrr

Active Member
Sep 22, 2021
141
75
28
Straylia
Hey all, long time lurker and finally made myself an account to engage here.

I've been planning to move from my Mac Mini 2012 attached to a Drobo to a more... respectable server for my home.
One of the key goals is to replace my rust drives entirely with flash, which is probably why this planning just keeps stretching on and on.

So currently I'm using about 8TB of data on 10TB of disks. About 6-6.5TB is media served through Plex, so I figure when I pull the trigger on all this I'll have an array of SATA SSDs for that, and use NVMe disks for the rest (photos, the email backend, HomeAssistant, InfluxDB).
I'll probably look to have a couple of very large rust drives as internal local backup for quick restores if I need them (though anything I'd actually be troubled by losing if the place burnt down still gets backed up off-site as well).

At this point I'm 99% sold on ZFS being the way to go for all the storage, particularly for snapshots and being able to use them for incremental backup both locally and remotely. RAIDz1 arrays for the bulk of the media, and RAIDz2 for the more unique and troubling-to-lose stuff like photos and emails, seems like a good trade-off in terms of balancing capacity against parity and losable disks.
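Roughly the snapshot/send workflow I have in mind, as a sketch only (pool and dataset names are placeholders, and I'd add proper error handling before trusting it with real data):

```python
import subprocess
from datetime import datetime

# Placeholder names - my real pools/datasets will differ
SOURCE = "tank/photos"    # RAIDz2 dataset with the irreplaceable stuff
BACKUP = "backup/photos"  # dataset on the big local rust drives

def zfs(*args):
    """Run a zfs command and return its stdout."""
    return subprocess.run(["zfs", *args], check=True,
                          capture_output=True, text=True).stdout

def incremental_backup():
    # Most recent snapshot already taken on the source dataset
    snaps = zfs("list", "-H", "-t", "snapshot", "-o", "name",
                "-s", "creation", "-d", "1", SOURCE).splitlines()
    last = snaps[-1] if snaps else None

    # Take a new snapshot named by timestamp
    new = f"{SOURCE}@{datetime.now():%Y%m%d-%H%M%S}"
    zfs("snapshot", new)

    # Full send the first time, incremental (-i) afterwards
    send_cmd = ["zfs", "send"] + (["-i", last] if last else []) + [new]
    send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    subprocess.run(["zfs", "receive", "-F", BACKUP],
                   stdin=send.stdout, check=True)
    send.wait()

if __name__ == "__main__":
    incremental_backup()
```

The remote leg would be the same idea with the receive end piped over SSH to the off-site box.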

As far as the rest of the system goes, I'm leaning towards some of the more recent AMD workstation-level stuff, with an eye to initially getting an entry-level Threadripper and grabbing a much higher-end one in a few years if my needs evolve and more CPU grunt would benefit.
Having a bunch of parallel CPU grunt would be preferable for handling software-based video encoding (both on the fly for Plex streams and when pulling stuff off Blu-rays and DVDs). I know plenty of CPUs come with H.264, H.265, VP9 and AV1 encode and decode built in, but everywhere I go the general consensus seems to be that if you CAN use software encoding and decoding, you get better results in image quality (and, when encoding, files that decode more easily) by making your computer do the extra legwork. Plus, when new video formats come along, I can always download new software to support them, whereas hardware can be a lot harder to add support to.

It definitely needs to be a platform that supports ECC memory. I'm still learning the different types and how they impact different scenarios, but it has always seemed odd to me that ECC wasn't just straight-up standard in computers. No matter what anyone tells you, Rowhammer is a design fault, not an exploit.

Gaming isn't really a concern; if I get into modern PC gaming in any real way I'll build myself a desktop targeting that need, and I'm not really one for most types of multiplayer, so I don't need the server to handle anything like that.

All of this covers what I use the Mini and various mini computers around the home for now. What I'm aiming to build will hopefully open up new projects and be flexible, powerful and expandable enough to cover uses for years into the future, so I'd love some opinions from y'all on what sort of hardware I should be looking at.
Probable interests, given the available resources, would be things like building my own voice assistant, using VMs to learn some more advanced networking skills, and maybe setting up one of those federated social networking hosts and weaning myself off Facebook.

Software-architecture-wise, I'm not sure which is the best approach: LXC containers, Docker, or whole VMs to give some separation to the logical units of "services" I'll be running. I've been looking at Proxmox a bit, but I'm not sure whether it would abstract me just far enough away from the implementation of everything that I'd miss out on learning some good sysadmin skills. Would love to hear experiences and opinions!
(The current setup has the email server in one VM, Plex in another VM, and HomeAssistant with Influx and all its pieces and add-ons in Docker.)

Hope the above doesn't make me sound too much of a nutter. I look forward to hearing what sort of tech is worth me looking into, and what people with experience think of the goals and ideas above. Thanks in advance!
 

ttabbal

Active Member
Mar 10, 2016
759
212
43
47
CPU-wise, either Epyc or Threadripper will work fine for what you're talking about. I'm not 100% up to date on video transcoding, but I'm sure you can find info on that. If you want to do things like hardware passthrough, Epyc and Threadripper are nice as you have more PCIe lanes, so you don't lose performance with multiple GPUs and such. One nice thing about Epyc is buffered (registered) ECC support. Most ECC DIMMs seem to be made for servers and are buffered; "normal" systems can't deal with those, and I don't think even Threadripper can.

For the array, I like ZFS a lot. I went with mirrors for easier changes and upgrades, and because rebuilding failed drives is much faster. The changes might get easier with ZFS adding support for restriping in place (RAIDZ expansion). And if you're running SSDs, the speed difference might not matter.
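If you want to eyeball the space trade-off before buying drives, the napkin math is easy to script - something like this (drive size and vdev width are just example numbers):

```python
# Rough usable-capacity comparison for one vdev of N drives.
# Ignores ZFS overhead, padding and the "keep it under 80% full" rule of thumb.

DRIVE_TB = 4   # example drive size
DRIVES = 6     # example vdev width

layouts = {
    "mirrors (3x 2-way)": (DRIVES // 2) * DRIVE_TB,  # half the raw space
    "raidz1": (DRIVES - 1) * DRIVE_TB,               # one drive of parity
    "raidz2": (DRIVES - 2) * DRIVE_TB,               # two drives of parity
}

raw = DRIVES * DRIVE_TB
for name, usable in layouts.items():
    print(f"{name:20s} {usable:>3d} TB usable of {raw} TB raw "
          f"({usable / raw:.0%})")
```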

Proxmox is great; I highly recommend it. It doesn't hide much from you, and you can do everything on the command line if you really want to. I admin it over SSH as much as with the web UI. I like that you can do both containers and VMs at the same time. Some setups benefit from the isolation VMs offer, but many don't need it, and you get better performance with containers. They also launch faster.
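You can also script against its API if you'd rather automate things than click around - a quick inventory check might look like this (this assumes the third-party proxmoxer Python client; the hostname and credentials are placeholders, and an API token is nicer than a password in practice):

```python
# Quick inventory of containers and VMs on each node.
# Assumes the third-party 'proxmoxer' client (pip install proxmoxer requests).
from proxmoxer import ProxmoxAPI

# Placeholder host and credentials
proxmox = ProxmoxAPI("pve.example.lan", user="root@pam",
                     password="changeme", verify_ssl=False)

for node in proxmox.nodes.get():
    name = node["node"]
    print(f"== {name} ==")
    for ct in proxmox.nodes(name).lxc.get():    # LXC containers
        print(f"  CT {ct['vmid']:>5} {ct.get('name', '?'):<20} {ct['status']}")
    for vm in proxmox.nodes(name).qemu.get():   # full KVM VMs
        print(f"  VM {vm['vmid']:>5} {vm.get('name', '?'):<20} {vm['status']}")
```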
 

WANg

Well-Known Member
Jun 10, 2018
1,336
993
113
46
New York, NY
Oh hey, welcome, fellow Mac Mini guy! I have a 2011 Mac Mini that I use to run my music collection and do some Mac-based stuff. It's not my daily "server", mind you. That being said, a useful old machine is still a useful machine - give it 16GB of RAM and an SSD, and it'll still punch above its weight.

For storage arrays, yeah, ZFS is a good way forward, but whether you want to spend the money on an NVMe SSD array... is still something you need to figure out. The more lanes you want exposed to NVMe, the more expensive and power-hungry your board gets. Of course, if you use SATA/SAS rust spinners, then the concern is how many disk bays you need and what performance characteristics you are looking for. Personally I would go with raidz2 or RAID10 - more tolerance for failed media, and the recovery time isn't that far off from raidz1.

In terms of transcoding, it's always a trade-off between power efficiency and speed - for most modern ASIC encoders the quality difference from the software version is not that far off for casual consumers (I am talking about 1080p30), but the speed gain is noticeable. Of course, running NVEnc on a separate card means that you are using more power to start with, Quick Sync is an Intel thing, and AMD VCN 1/2 is not that well supported in Linux; even in Windows, the performance is... not terrific. That being said, knowing how to virtualize/partition vGPU is a fairly useful skill, so maybe you might want to invest some time and money into it.

When it comes to the software stuff, LXC/containers are more lightweight while VMs are more flexible - VMs used to carry a much higher performance penalty, but with modern hardware it's less and less with each succeeding generation. My answer here is "both" and my hypervisor of choice is Proxmox (it's cheap, it's KVM with some niceties, and it's not VMware/RHEL/Citrix). Unless your $dayjob uses it, I would not bother with ESXi.

In some cases, you can buy/build a more powerful do-everything machine (like a Lenovo ThinkSystem SR665 or its Supermicro counterpart), or split your storage and compute into two separate machines (say, one that runs Proxmox and one that runs TrueNAS Core), and then optimize each machine for the task you want. For example, maybe get a dedicated 8-bay NAS (something like a QNAP TVS-873 or similar) for storage and something like a Supermicro SuperServer E301 for compute, then wire them together using NFSv4 or iSCSI, either directly or through some kind of switch/router setup.

I think the more important questions to ask are:
a) How much room do you have in your home? Can you put a rack in?
b) What's the power circuit situation looking like? What's your tolerance on a high power bill? (You need to pay for the computing side, the storage side, the network side and the cooling, if applicable)
c) What about cooling? And what's your mitigation strategy in terms of noise?
d) What's your anticipated data/computing usage like, and how much is it growing?
 

TonyArrr

Active Member
Sep 22, 2021
141
75
28
Straylia
Oh hey, welcome, fellow Mac Mini guy! I have a 2011 Mac Mini that I use to run my music collection and do some Mac-based stuff. It's not my daily "server", mind you. That being said, a useful old machine is still a useful machine - give it 16GB of RAM and an SSD, and it'll still punch above its weight.
Represent! It's still a solid machine for sure. Once I move everything off it, it will probably become my day-to-day desktop, tbh. Never gives up!

Epyc and Threadripper are nice as you have more PCIe lanes
This was another reason I liked them - you can do anything with PCIe, so I figured that was a good way to stay flexible.

For the array, I like ZFS a lot. I went with mirrors for easier changes and upgrades, and because rebuilding failed drives is much faster. The changes might get easier with ZFS adding support for restriping in place (RAIDZ expansion). And if you're running SSDs, the speed difference might not matter.
Yeah, I would generally prefer mirrors for the redundancy and how much easier they are to rebuild, but with SSDs the dollar-per-GB comes into play. Buying enough flash to move everything onto and having 50% of it be a duplicate will break any way I try to budget.
That's why I figured I'd split the difference and do RAIDz1, then buy some cheap and large spinners and occasionally sync a snapshot over to them too.

It doesn't hide much from you, and you can do everything on the command line if you really want to. I admin it over SSH as much as with the web UI. I like that you can do both containers and VMs at the same time.
My answer here is "both" and my hypervisor of choice is Proxmox
OK, I'll look more at it and dive through more docs and such.

whether you want to spend the money on an NVMe SSD array
I kinda do, for the stuff where speed matters anyway, like hosted apps and databases. Not a huge NVMe array, mind you - I figured 2 to 3 TB with redundancy, and everything else can live on cheaper SATA SSDs.

In terms of transcoding, it's always a trade-off between power efficiency and speed
Exactly. I think the sticking point someone made to me elsewhere is that software encoders can be updated endlessly to get better, and your limiting factor is how much grunt your CPU has to roll with it. My Mac Mini encodes my 4K Blu-rays in H.265 at the moment; it just takes a long time, but the results are fantastic, and my older TV, which lacks hardware decode for H.265, can still play them back at 25fps, which is all you really need.

But with hardware encoders, most improvements require new hardware. So to me, having hardware encoders there is fine, but given the choice I'd put my money towards general-purpose computing power rather than encoding-specific hardware support.
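For what it's worth, the software encodes I'm talking about are just ffmpeg with libx265 behind a little queue script - something like this sketch (the paths, CRF and preset are only examples to tune to taste):

```python
import subprocess
from pathlib import Path

# Placeholder directories for ripped discs and finished encodes
RIP_DIR = Path("/tank/rips")
OUT_DIR = Path("/tank/media/movies")

def encode(src: Path, dst: Path):
    """Software H.265 encode with ffmpeg/libx265, copying the audio as-is."""
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-c:v", "libx265",
        "-preset", "slow",   # slower preset = better quality per bit
        "-crf", "20",        # quality target; lower = bigger files
        "-c:a", "copy",      # leave the audio tracks alone
        str(dst),
    ], check=True)

for rip in sorted(RIP_DIR.glob("*.mkv")):
    target = OUT_DIR / rip.name
    if not target.exists():  # skip anything already done
        encode(rip, target)
```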

knowing how to virtualize/partition vGPU is a fairly useful skill
This just made my list of "like to learns", I can think of a lot of applications for that

you can buy/build a more powerful do-everything machine (like a Lenovo ThinkSystem SR665 or its Supermicro counterpart), or split your storage and compute into two separate machines
Given the space, I'd probably rather build the more powerful do-everything machine, and use lower-powered but suitable-as-interface computers to just control the jobs on the server. I probably should have been born in the time of the mainframe.

a) How much room do you have in your home? Can you put a rack in?
I'm in an apartment, about 70 square meters. I'm always looking for a small house to move to, in which case a full rack would be going in, just to Ethernet the heck out of the place.
Currently, I could either build a large X-ATX sized workstation system and fit it under my desk, or I could install a 1U or 2U system in the space behind the TV

b) What's the power circuit situation looking like? What's your tolerance on a high power bill? (You need to pay for the computing side, the storage side, the network side and the cooling, if applicable)
My power bills are not huge, the city mandated a move to Solar+Wind+Storage for infrastructure a few years back and the power prices have only been dropping since we made it to 100% ☺
That said, I can only draw about 30 amps through a single circuit, and 10 amps per GPO.

c) What about cooling? And what's your mitigation strategy in terms of noise?
Part of the plan was "use all flash, and only spin up the hard drives when running a backup", but I know fan noise will be a thing. While fan noise isn't a bother to me during the day, I'd probably need some scheduling in the system to keep the tasks that generate the most heat to daytime hours so I can get to sleep.
I could maybe, maaaybe run liquid cooling to a radiator on the balcony though, but I'd need to look into it in more detail to be sure.
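The scheduling part could honestly just be a guard like this in front of the heavy jobs (the hours are arbitrary, and cron or systemd timers would do the same job):

```python
from datetime import datetime

# Arbitrary "allowed to make heat and noise" window
DAY_START, DAY_END = 8, 21   # 8am to 9pm

def ok_to_run_heavy_jobs(now=None) -> bool:
    """Only let encodes/scrubs/backups kick off during the day."""
    hour = (now or datetime.now()).hour
    return DAY_START <= hour < DAY_END

if __name__ == "__main__":
    if ok_to_run_heavy_jobs():
        print("Daytime: fine to start an encode or spin up the backup drives")
    else:
        print("Night: defer the noisy stuff until morning")
```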

d) What's your anticipated data/computing usage like, and how much is it growing?
I expect I'll keep racking up the movies and TV shows; at the moment it's about 400GB a year (garram Blu-rays!), although that's an unscientific estimate. With a beefier processor available, though, I might start getting the file sizes down to something more reasonable, so that could slow...
Video encoding is the most intensive thing I do at the moment in terms of compute, though I increasingly want to detach from the big tech companies for things like smart home and correspondence, so I may end up needing more grunt to take on that sort of work. If I get much smarter and much more bored, a local voice assistant might be something I'd do.

Overall I think I'd rather build a system that covers all my "now" needs and has the flexibility for me to add to it for things like self-hosting, and if I then don't expand into those sorts of things because life, no harm - it's still a good system, right?
 

nickf1227

Active Member
Sep 23, 2015
197
128
43
33
Just to throw a wrench in, have you considered older enterprise gear?

IMO, it will give you a really good chance to get super high-end equipment for pennies on the dollar.
If Plex, movies, transcoding, etc. are important to you, they are fairly adequate to handle plenty of transcodes.

Add in a dedicated GPU

And you have quite a beast of a machine xD

Plenty of drive bays for all the flash you could want, or you can get 3.5" drive bays instead and load one up with tons of storage.

Pick up the faster E5-2600 v2 series chips, and you will really be screaming, with plenty of room to grow.

These platforms run on DDR3 RDIMMs, which can be found for extremely cheap
 

TonyArrr

Active Member
Sep 22, 2021
141
75
28
Straylia
Just to throw a wrench in, have you considered older enterprise gear?
I have nothing really against older gear, though I haven't seen value-proposition offers anything like the ones you've just found. My eBay-fu could use a lot of help.

I think part of that is trying to get my head around the different generations, sockets, model numbers, etc. I can look at model numbers for consumer everything and know exactly where it sits in age, performance and generation and what works with it, but enterprise-grade stuff seems to be such a mix-and-match everywhere I look, especially CPUs.

I was mainly looking at new/current-gen stuff because I figured learning all the ins and outs while sticking within a year or so of tech releases would be more manageable, and because I figured it would be easier to work out and acquire a system that I won't need to upgrade for years (aside from adding storage). But if there are dual-Xeon systems out there for 200 bucks US, that opens a lot more up for me!
 

nickf1227

Active Member
Sep 23, 2015
197
128
43
33
The two model numbers I posted are both good, and you can add the Dell R720, which is of the same generation; however, it's selling for a lot more than the Cisco or HP for whatever reason :) So is Supermicro, actually.

I have the 1U version of that Cisco server and I like it.
 

gregsachs

Active Member
Aug 14, 2018
589
204
43
The Intel S2600GZ/GL is also a good series that does E5 V1/V2, with readily available updates.
Here is a 1U version; it looks like it has the ConnectX-3 dual 40/56Gb card as well.
I ran one of these with an external HBA to a storage shelf for 3.5" drives for a long time, then switched to a 2U 12x 3.5" version.
 

TonyArrr

Active Member
Sep 22, 2021
141
75
28
Straylia
So as I poke around, I'm finding that once those Ciscos and HPs get to the models with more than 10x 2.5" bays, the cost is about the same as the R720s go for in 16- and 24-bay models, ~$525+.
The high bay count is useful for me, since it means that to grow my available storage I can start off with a batch of smaller SSDs, and instead of having to buy larger SSDs than the previous purchase to replace the existing ones, I can buy the same capacity and just add a new vdev. Each expansion should get cheaper with time, and by the time I'm having to replace an existing vdev with larger drives, hopefully those larger-capacity SSDs will be plenty cheaper.
(Weird logic, but I think it makes sense. If you have good reason for me not to follow this logic, PLEASE let me know so I can course-correct.)
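The napkin maths I keep running for that looks roughly like this (drive size, vdev width and bay count are made-up examples):

```python
# Napkin maths for growing a pool by adding same-size raidz1 vdevs over time.
# Drive size, vdev width and bay count are all made-up examples.

DRIVE_TB = 2   # example SSD size I'd standardise on
WIDTH = 4      # drives per raidz1 vdev
BAYS = 16      # bays in the chassis (16-bay R720 example)

usable = 0
for vdevs in range(1, BAYS // WIDTH + 1):
    usable += (WIDTH - 1) * DRIVE_TB   # raidz1: one drive of parity per vdev
    print(f"{vdevs} vdev(s): {vdevs * WIDTH:2d} bays used, {usable:2d} TB usable")
```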

I'm leaning between an R720 with 16 bays (caddies included) and a pair of E5-2690 v2s with 64GB of RAM:

and an R720XD with 24 bays (front; there are also two in the back), for which I'm enquiring about the cost with a pair of E5-2680 v2s and 64GB of RAM:
I'm wanting a bit of an upped CPU because going from a 2650 to a 2690 really doesn't seem to make a big difference in the non-CTO listings, and frankly, if there was ever a time I would eff up a straightforward upgrade, it would be installing one of these CPUs. I can almost guarantee I'd drop it at the last second and bust some socket pins.

Any thoughts on those? I'll update when I hear back of the cost of upping the CPU on the second.

P.S @gregsachs how do you get the URL to unfurl like that and show a preview in the post?
 

gregsachs

Active Member
Aug 14, 2018
589
204
43
Update:
The second one is $50 to jump to 2697 v2s
P.S @gregsachs how do you get the URL to unfurl like that and show a preview in the post?
I just pasted the link, it must be magic!

I personally run dual 2650 v2s; I found them to be a good balance between cost and performance, but for an extra $50 it is probably worthwhile to double your CPU power - you won't be able to get a pair of 2697 v2s for $50 anytime soon.