Please help with choosing a distro for my home server.


vl1969

Active Member
Feb 5, 2014
634
76
28
Hello, I have been lurking here for a while, and now I again need some opinions and/or help.

Over the last year, I have been trying to build out a home server for virtualization and file server roles.
My needs/wants are:

#1. It will be a Linux OS.
I am currently looking at CentOS, openSUSE, and Fedora.
I have tried Ubuntu, Proxmox, Debian, and a couple of custom distros like NethServer.
#2. I plan to run KVM for VM hosting.
#3. ALL data will be hosted on the main host. I have a Supermicro 24-bay server that will be my main and only host; it will run the main distro and hold all my data drives.
#4. I really want a WebUI for both host and VM management. I do not mind occasional CLI intervention if needed, but a GUI is a must, and a WebUI is really nice to have so I can access it from outside if need be.

I currently have a bunch of disks in a RAID 1 BTRFS pool. My server does not have hardware RAID, so I chose BTRFS for both data protection and the RAID setup. But that has also created lots of difficulties in setting up my host, since not many distros support BTRFS fully, if at all, and it brings some other difficulties and restrictions along the way.
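
For reference, this is roughly how the pool was created and how I keep an eye on it (device names and mount point are placeholders):

# btrfs raid1 for both data and metadata, no hardware raid needed
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool
btrfs filesystem usage /mnt/pool        # shows how data is spread across the devices
# a periodic scrub detects silent corruption and repairs it from the good copy
btrfs scrub start /mnt/pool
btrfs scrub status /mnt/pool
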
So, if someone can suggest something that will work and be reliable, I will definitely consider it.
I was looking into SnapRAID recently, and if I can figure out how to use it for my setup, it is one of the top candidates for me.

My main use of the server is as a shared, centralised space for all the computers in the family, plus media storage and serving. I plan to run several helper VMs, like OpenMediaVault or ownCloud, maybe a Plex or Emby server, and a VM with SABnzbd and related software.
 

Deslok

Well-Known Member
Jul 15, 2015
1,122
125
63
34
deslok.dyndns.org
Linux isn't my specialty, but I will say that openSUSE is a win when I do need to use it, thanks to their SUSE Studio tool (and I did take a minute to check: they do support BTRFS).
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
I'm curious as to why you ruled out Proxmox - it sounds like a good fit for you. As far as I know it doesn't support btrfs, but you don't want to host VMs on btrfs anyway, and Proxmox does support ZFS, which doesn't have the performance issues btrfs does with VMs while offering just as good data protection. And it comes with a Web-UI to manage the host and its VMs (and LXC containers). It does use KVM as the hypervisor for VMs, but it doesn't use libvirt, so you can't use many of the other KVM-management tools out there (e.g. virt-manager, virsh, etc.).
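
As a rough idea of the Proxmox-native tooling that replaces virsh, the 'qm' command covers the same ground (the ID, name, and options here are just examples):

qm list                                                    # show the VMs on this node
qm create 101 --name fileserver --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
qm start 101
qm stop 101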

All of the rest of the functionality you need gets pushed into VMs or containers - I've got a few CentOS 7 containers running on Proxmox 4 at home, and I would suggest giving them a try for any Linux guests you need, using full VMs only for non-Linux guests (Windows or BSD-based, e.g. FreeNAS, pfSense, etc.).
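
If you want to try a container from the shell, it looks roughly like this (the template filename, ID, and options are examples and will differ on your box):

pveam update                                   # refresh the list of available templates
pveam available | grep centos-7
pveam download local centos-7-default_20160205_amd64.tar.xz   # exact filename will differ
pct create 200 local:vztmpl/centos-7-default_20160205_amd64.tar.xz --hostname testct --memory 1024
pct start 200
pct enter 200                                  # drop into a shell inside the container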

Snapraid is awesome for media servers, or other things with a lot of data that never changes, but you don't want to host VMs on it. In general you shouldn't store any files on snapraid that get modified, as that can affect your ability to recover until the next sync is run (most people have that running nightly). Adding new files to a snapraid array does not have that issue - protection for existing data is not affected; only the new files are unprotected until the next sync.
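
For reference, a bare-bones snapraid setup is just a small config plus a scheduled sync, something like this (paths and disk names are placeholders):

# /etc/snapraid.conf
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2

# then, typically from a nightly cron job:
snapraid sync       # update parity to cover files added since the last run
snapraid scrub      # periodically verify the data against parity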

I would say divide your disks into two groups. The first group is managed by the host (ZFS if Proxmox, else probably ext4 or xfs on top of md-raid, maybe with LVM in between) and holds the host OS plus all of the VM disk files. The second group gets passed through to a VM (preferably connected to a separate controller/HBA so you can pass the entire controller to the VM, though individual disk passthrough would work too) and assembled into a snapraid array to hold all of the bulk data. If you have a lot of data that isn't a good fit for snapraid, just make the first group big enough to hold it all - with ZFS you can easily make it a hybrid pool with a few spinners in a space-efficient parity-raid for capacity and some SSD as cache/write-log to make it perform much better than parity-raid normally would.
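
A rough sketch of that kind of hybrid pool, with device names as placeholders:

# spinners in raidz for capacity, SSD partitions as read cache and mirrored write log
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool add tank cache /dev/sdf1
zpool add tank log mirror /dev/sdf2 /dev/sdg2
zpool status tank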


As one other point, if you end up playing with CentOS/Fedora you might like Cockpit. It comes installed by default in some Fedora spins and is just a 'yum install cockpit' away in CentOS 7. It provides a nice little per-server web UI for basic management.
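
On CentOS 7 that is roughly:

yum install cockpit
systemctl enable cockpit.socket
systemctl start cockpit.socket
firewall-cmd --permanent --add-port=9090/tcp && firewall-cmd --reload
# then browse to https://<server>:9090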
 

vl1969

Active Member
Feb 5, 2014
634
76
28

I did not really rule out Proxmox per se, but at the time I was dead set on using BTRFS for my data drives, and with Proxmox that was a hassle to do.
Again, ZFS might be good, but it seems (to me anyway) a bit more involved when it comes to setting up and managing pools, and it is not as flexible as BTRFS if you want to expand or resize a pool.
As it stands now, I am thinking of setting up the server as follows.

I have a couple of SSDs that I can use to set up a RAID 1 system drive, if I ever figure out how to do a bootable RAID install. I mean an install where, if any one drive fails, the system would still boot and keep running until the failed drive is replaced.
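
From what I have read, the usual approach is software RAID 1 for the system partition with the bootloader installed on both drives, roughly like this (partition names are examples, and most installers can set this up for you):

# mirror the root partition across both SSDs with md-raid
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext4 /dev/md0                  # the OS install then goes onto /dev/md0
# install the bootloader to BOTH drives so either one can boot on its own
grub2-install /dev/sda              # grub-install on Debian-based distros
grub2-install /dev/sdb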

I have 2x 1TB drives that I also want to use in a RAID 1 setup for all my VM files except data. I plan to have only system drives for the VMs; any and all data used inside them would live on the main data pool.

Now, for the main data pool to be used and shared everywhere, I have 1x 3TB and 2 or 4 2TB drives that currently make up a BTRFS RAID 1 pool.
Most of the data I have is media files and backups from several PCs in the household, and it does not change much. I also might add a 1TB drive or two (they are currently used in one of the workstations, but once I have a running server all that data would be moved over) to provide temp storage or cache for my downloading setup: a common share used by the VM that runs OMV and the VM that runs SABnzbd and other downloading software, to hold files that are still downloading and/or not yet sorted until they are processed. Yes, there is a risk of losing a file or two, but that is not that important.
So it's either SnapRAID for the main data pool or the BTRFS pool I already have.
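
Part of why I like BTRFS here is that growing or shrinking the existing pool is simple (devices and mount point are placeholders):

# add a drive to the mounted pool and spread the data across all members
btrfs device add /dev/sdg /mnt/pool
btrfs balance start /mnt/pool
# removing a drive migrates its data to the remaining devices first
btrfs device delete /dev/sdd /mnt/pool
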
I am loading up a Fedora Server ISO as we speak; I want to try it out and see if it works for me.

My other main issue before was the lack of a nice, competent WebUI for system and VM management, aside from Proxmox. But I think I have figured something out and will post an update if it works out. In the meantime, I would like others' take on what I can do and how.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
For what it's worth, the Proxmox installer will install to ZFS - including a mirror or RAIDZ/Z2/Z3, similar to RAID 1, 5, or 6. It does all the setup for you, so there is nothing to 'figure out' to have a mirror or parity boot that survives a drive loss.
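
After the install, checking the mirror and replacing a failed member looks something like this (assuming the default pool name rpool; device names are placeholders):

zpool status rpool
# after swapping in a new drive, resilver onto it
zpool replace rpool /dev/sdb2 /dev/sdc2
# the new disk also needs the partition layout and bootloader copied over before it can boot on its own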

BTRFS is interesting, but it is still not quite ready for use.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,517
5,812
113
I have to admit, when I started with Proxmox I was terrible at it, coming from a Windows background. Now I think they did a great job with Proxmox VE 4.0/4.1. Much better than the early 2.x and 3.x versions.

This book was for the 3.x version but includes Ceph configs: Mastering Proxmox, Wasim Ahmed, eBook - Amazon.com

If you can spend some offline time and want to get comfortable with it, that book is a good read. For smaller clusters (say, <16 nodes) Proxmox is really nice. It has decent hardware support since it is Debian-based, and they are staying somewhat current on newer kernels. LXC was a great addition over OpenVZ, and you can run Docker on the base OS - although with no WebUI integration.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
I still struggle to recommend Proxmox. As Patrick notes, the current release (4.x) is pretty solid and useful. KVM and LXC are clearly on the rise and beating (or within sight of winning against) VMware, Hyper-V, and Xen. They've done a fabulous job integrating key supporting technologies (ZFS, Ceph, etc.).

But... the developers remain arrogant, with a real 'our way or else' attitude. I've not yet gotten over their willingness to charge money for the efforts of others. And they play fast and loose with legitimate IPR challenges (e.g., I love their integration of ZFS, but their distribution model ignores the terms of the ZFS CDDL license).

To be fair, I have to admit I've moved my lab back to Proxmox (from Hyper-V) because they hit the right technical buttons. But I cringe a lot at recommending it too loudly because of the behavior/attitude of the principal players behind it.
 

canta

Well-Known Member
Nov 26, 2014
1,012
216
63
43
The Proxmox forum is not that useful, I think. They monitor the Proxmox mailing list actively; join their dev mailing list and post your opinion.

Starting from 3.4 it has been very stable.

OVS (Open vSwitch) is nice too.
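
For example, a basic OVS bridge for VM traffic is roughly this (interface names are placeholders; on Proxmox you would normally do it through the network config or WebUI instead):

ovs-vsctl add-br vmbr1
ovs-vsctl add-port vmbr1 eth0
ovs-vsctl show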

If you know the overall Linux system and its packages, troubleshooting should be smooth sailing.