Storage Server w/Mergerfs + Snapraid + Proxmox + ZFS??


IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
My plan is to test out a new storage server setup that consists of an Ubuntu Server 20.04 OS using mergerfs to pool the drives, SnapRAID for parity, Proxmox for VMs, and ZFS for both the root file system (mirrored Optane drives) and a RAID10 pool for a write cache/VM datastore.

I've been researching this for the past few weeks and I've seen a lot of "guides" that show how to install the OS and then move the root FS partitions over to a newly created ZFS mirror. But I've also run into just as many people/comments who have had issues doing this. So before I take the plunge, I'm wondering if anyone here has done something similar and can share any tips, advice, and/or cautionary tales.

The goal of this setup would be to have my storage laid out as follows:
  • ZFS mirror (Optane NVMe drives) - root FS + Docker appdata datasets
  • ZFS RAID10 zpool (S4600 SSDs) - VM datastore + write cache for the mergerfs pool
  • mergerfs pool (bulk spinner disks) - primarily media storage
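
For reference, the bulk-storage piece of that plan would look something like this; the mount points, disk names, and option choices are placeholders I'm sketching from the mergerfs/SnapRAID docs, not a final layout:

    # /etc/fstab - pool the spinners under a single mount point
    /mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=50G 0 0

    # /etc/snapraid.conf - one parity drive protecting the data disks
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/.snapraid.content
    data d1 /mnt/disk1
    data d2 /mnt/disk2
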
Any feedback/suggestions would be much appreciated.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Over the past couple of years you've written many posts on your server evolution and how effectively you've used Unraid to serve your needs. Given that, I'm curious what's motivating this change. Are there particular use cases you can't handle with Unraid, limitations you're trying to overcome, or other annoyances?

You'll get better feedback if we understand what your motivation and requirements are.
 

apnar

Member
Mar 5, 2011
115
23
18
Having just built a 20.04 box and wanting to leverage the new built-in ZFS root file system options, I'll let you know a few quick things. Root on ZFS is only available in the Desktop installer, not the Server one. You must use UEFI boot. And there is no way in the installer to put the ZFS root on more than one disk.

I ended up installing with the desktop version to a single disk, then switching to command-line boot only ("systemctl set-default multi-user.target"). I then copied the partition table from the boot drive to the drive I wanted to mirror to and added those partitions to the two zpools. In the end it worked fine and I ended up with what I wanted.
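
Roughly, the commands looked like this; device names are examples and the partition numbers depend on how the 20.04 installer laid things out, so treat it as a sketch rather than a recipe:

    # copy the partition table from the installed disk (nvme0n1) to the new disk (nvme1n1)
    sgdisk --replicate=/dev/nvme1n1 /dev/nvme0n1
    sgdisk --randomize-guids /dev/nvme1n1

    # attach the matching partitions so both pools become mirrors
    zpool attach bpool /dev/nvme0n1p3 /dev/nvme1n1p3
    zpool attach rpool /dev/nvme0n1p4 /dev/nvme1n1p4
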

More general question though: why are you using your best/fastest drives for the boot/root partitions? I usually use whatever old SSDs I have lying around for those and save the fast stuff for my actual workloads or ZFS ZIL/L2ARC drives.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Over the past couple of years you've written many posts on your server evolution and how effectively you've used Unraid to serve your needs. Given that, I'm curious what's motivating this change. Are there particular use cases you can't handle with Unraid, limitations you're trying to overcome, or other annoyances?
You'll get better feedback if we understand what your motivation and requirements are.
Half of this is my constant curiosity about what else is out there and how I can keep improving my setup. The other half is my frustration with iowait issues related to Unraid's btrfs cache pool configuration. I have "workarounds" in place to deal with those shortcomings, but I'm still always on the lookout for a better way to do things.


Having just built a 20.04 box and wanting to leverage the new built-in ZFS root file system options, I'll let you know a few quick things. Root on ZFS is only available in the Desktop installer, not the Server one. You must use UEFI boot. And there is no way in the installer to put the ZFS root on more than one disk.

I ended up installing with the desktop version to a single disk, then switching to command-line boot only ("systemctl set-default multi-user.target"). I then copied the partition table from the boot drive to the drive I wanted to mirror to and added those partitions to the two zpools. In the end it worked fine and I ended up with what I wanted.

More general question though: why are you using your best/fastest drives for the boot/root partitions? I usually use whatever old SSDs I have lying around for those and save the fast stuff for my actual workloads or ZFS ZIL/L2ARC drives.
Thanks for the info. I've considered just using a cheap pair of 120GB S3500s that can be had for $20 apiece. But the problem is I don't currently have anywhere to put them in my system, as all eight SATA ports are taken. I could put them in my SAS disk shelf, but I'm guessing my HBA (9300-8e) won't pass TRIM commands.
 

markarr

Active Member
Oct 31, 2013
421
122
43
Any reason you don't want to use Proxmox as the base? It does ZFS boot (on whatever disks you want) out of the box with no fuss. Have your root FS and your RAID10 pools for VMs and LXCs; for Docker, install it on the base OS or put it in an LXC or VM. I'm currently running all of the above: I have mergerfs for bulk media, Proxmox mounts that as its backup location, and I protect it with SnapRAID.
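
For what it's worth, pointing Proxmox at the mergerfs mount as a backup target is just a directory storage; something along these lines (the storage name and path are examples):

    # register the mergerfs mount point as a backup storage in Proxmox
    pvesm add dir bulk-backup --path /mnt/storage/backup --content backup
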
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Any reason you don't want to use Proxmox as the base? It does ZFS boot (on whatever disks you want) out of the box with no fuss. Have your root FS and your RAID10 pools for VMs and LXCs; for Docker, install it on the base OS or put it in an LXC or VM. I'm currently running all of the above: I have mergerfs for bulk media, Proxmox mounts that as its backup location, and I protect it with SnapRAID.
I've considered that, but I was more a fan of adding Proxmox as a package on top of a stock install. I've heard of many Proxmox users having issues adding packages/applications to the base OS after the fact, and I don't want any limitations with the base OS. Have you run into any issues like this?
 

markarr

Active Member
Oct 31, 2013
421
122
43
I have only added Docker, SnapRAID, mergerfs, and msmtp, so my installs on the base OS are limited. Proxmox has a built-in data collector that I send to a Grafana LXC, and I've also tweaked the Unraid Plex Grafana dashboard to work with my Plex LXC.

The one thing I don't know that you have covered with your setup is the GPU for Plex. I know Proxmox has PCI passthrough, but I have not used it.
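
If it helps, the collector side is just a stanza in /etc/pve/status.cfg pointing at the metrics server; I'm going from memory on the exact format, and the address/port are examples:

    # /etc/pve/status.cfg - ship Proxmox metrics to an external InfluxDB over UDP
    influxdb:
        server 192.168.1.50
        port 8089
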
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
I have only added Docker, SnapRAID, mergerfs, and msmtp, so my installs on the base OS are limited. Proxmox has a built-in data collector that I send to a Grafana LXC, and I've also tweaked the Unraid Plex Grafana dashboard to work with my Plex LXC.

The one thing I don't know that you have covered with your setup is the GPU for Plex. I know Proxmox has PCI passthrough, but I have not used it.
I actually use an iGPU for Plex now, and I'll definitely need that to work. I do not plan to use any LXCs, only Docker containers with Docker Compose.
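
For comparison, the iGPU piece on the Docker side is just a device mapping in the compose file; the image name and paths here are examples:

    # docker-compose.yml - pass the iGPU through for Quick Sync transcoding
    version: "3"
    services:
      plex:
        image: plexinc/pms-docker
        devices:
          - /dev/dri:/dev/dri        # expose the iGPU to the container
        volumes:
          - /mnt/appdata/plex:/config
          - /mnt/storage/media:/media
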
 

markarr

Active Member
Oct 31, 2013
421
122
43
I don't sleep the drives. I've gone back and forth on that, but most of my drives are from older enterprise storage, so the power-on hours were already high when I got them and I didn't want to take a chance on the spin cycles taking them out.
 

Marsh

Moderator
May 12, 2013
2,644
1,496
113
Got it.
Proxmox's status daemon kept the drives spinning all the time.
I switched from Proxmox to Ubuntu for that reason.

I watch 1-2 hours of media at night. There's no reason for me to keep the bulk media spun up all day.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Got it.
Proxmox's status daemon kept the drives spinning all the time.
I switched from Proxmox to Ubuntu for that reason.

I watch 1-2 hours of media at night. There's no reason for me to keep the bulk media spun up all day.
Thanks for this info. I definitely sleep my drives and want that ability. It's one of the reasons I use non-striped arrays.
 

markarr

Active Member
Oct 31, 2013
421
122
43
Looks like if you exclude your disks from LVM, pvestatd doesn't scan them, so you should be able to spin down drives in Proxmox that way.
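
Something like this in /etc/lvm/lvm.conf should do it; the device patterns are examples for the spinners and would need to match your actual drives:

    # /etc/lvm/lvm.conf - reject the bulk data disks so LVM scans never wake them
    devices {
        global_filter = [ "r|/dev/sd[b-i].*|", "a|.*|" ]
    }
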
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Looks like if you exclude your disks from LVM, pvestatd doesn't scan them, so you should be able to spin down drives in Proxmox that way.
+1. The bug that causes pvestatd to wake up the drives is actually in LVM, and excluding the disks does indeed allow them to spin down.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
BTW, if what you want is the Proxmox installer (for ZFS) and not Proxmox itself, you could always install using their installer and then "apt remove --purge" the PVE components. That would leave you with a (mostly) stock Debian install on ZFS root with redundancy (Debian libraries with the Proxmox kernel, actually).
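
Something along these lines; I haven't verified the exact package list, so check what a fresh install actually pulls in (dpkg -l | grep pve) before purging:

    # strip the PVE-specific packages after installing from the Proxmox ISO
    apt remove --purge proxmox-ve pve-manager pve-cluster qemu-server
    apt autoremove --purge
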

At the end of the day I'd just prefer Ubuntu 20.04. I don't really need Proxmox for anything anymore since all my services have migrated to Docker images. I just really like how Proxmox did their installer!
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
I honestly don't need Proxmox for much, as this server only runs a single Windows Server 2016 VM. Everything else runs in Docker. I'd also prefer to use Ubuntu 20.04 if I can, so installing the Proxmox kernel just to run a single VM and then trying to piece everything else together on a non-standard Debian kernel isn't very attractive.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Got it and agree. Of course your alternative - if you want redundant ZFS root - is a challenging semi-manual install of Ubuntu. Choose your challenge!
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Got it and agree. Of course your alternative - if you want redundant ZFS root - is a challenging semi-manual install of Ubuntu. Choose your challenge!
Honestly, I'm thinking I will probably just install Ubuntu Server 20.04 on a RAID1 mirror. I liked the idea of the ZFS mirror for easy snapshot rollback in case something goes wrong during a kernel update/upgrade, but it's probably not worth the hassle.
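
For context, the rollback workflow I'd be giving up is basically just this; the dataset name is whatever the 20.04 installer creates, shown here as an example:

    # checkpoint the root dataset before a kernel upgrade
    zfs snapshot rpool/ROOT/ubuntu@pre-upgrade
    apt full-upgrade
    # if the new kernel misbehaves, roll the root dataset back
    zfs rollback rpool/ROOT/ubuntu@pre-upgrade
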
 

Marsh

Moderator
May 12, 2013
2,644
1,496
113
For me, I don't even have an OS disk RAID mirror.

My plan:
  • 5-10 min to PXE boot and install a fresh Ubuntu OS.
  • 5-10 min to run my Ansible playbook or shell script to install all the required packages.
  • 5 min to restore the OS config, Docker config, and Emby metadata files.
  • Then docker-compose up.
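
The playbook is nothing fancy, roughly this shape; the package names and backup path are examples:

    # playbook.yml - minimal sketch of the rebuild play
    - hosts: storage
      become: yes
      tasks:
        - name: Install required packages
          apt:
            name: [docker.io, docker-compose, snapraid, mergerfs, msmtp]
            state: present
            update_cache: yes
        - name: Restore OS and Docker config from the backup archive
          unarchive:
            src: /mnt/backup/config-backup.tar.gz   # already on the target host
            remote_src: yes
            dest: /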

How often does an Intel OS SSD go bad?
For the STH forum, it was 100%.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
For me, I don't even have an OS disk RAID mirror.

My plan:
  • 5-10 min to PXE boot and install a fresh Ubuntu OS.
  • 5-10 min to run my Ansible playbook or shell script to install all the required packages.
  • 5 min to restore the OS config, Docker config, and Emby metadata files.
  • Then docker-compose up.

How often does an Intel OS SSD go bad?
For the STH forum, it was 100%.
I really need to get into Ansible... clearly. I don't have any kind of automation like that handy. If I did, I probably wouldn't worry about a mirror either. How do you have your OS config and Docker config backed up for quick and easy restore?