Proxmox AIO

Poll: Option 1, 2, 3, or other?



Ahira (New Member)
Hello all,
Thanks in advance for reading and for your feedback. I have a Cisco UCS server with 8 SAS drives, dual Xeon E5-2660 v4 CPUs, and 384GB RAM, and I'm trying to map out the best way to use it for what I need. I'm coming from an unRAID gaming machine that runs a gaming VM and all my media Dockers (Plex, Radarr, Sonarr, etc.). I've done a lot with ESXi, but not really for home use.

Objectives:

  • Deploy a media stack of Dockers. I want Radarr to be able to hard-link files from the download client. To elaborate: Radarr sends the download request to Deluge, Deluge downloads it to a directory (NFS share?) named Downloads and extracts it in place, and Radarr then sees the downloaded file and renames/hard-links it into the Plex media directory. I could not get hard-linking in unRAID to work to save my life; it would always copy the file, creating duplicates. (See the sketch just after this list.)
  • I want to use ZFS. I want alerts/monitoring of drive status. Would prefer to manage with a GUI.
  • Avoid using a SLOG (already bought a 900p), but I will use it if 100% necessary.
  • Run a few other needed VMs such as GNS3, Cisco ISE, Firepower, and Pi-hole (maybe as a Docker or LXC container?).
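(Side note on the hard-link objective in the first bullet: a hard link can only be created when source and destination are on the same filesystem and the same mount. A common cause of the copy-instead-of-link behaviour in Docker setups is passing the download folder and the library folder to the container as two separate volume mappings; mapping one parent directory avoids it. The paths, image name, and port below are only illustrative, not the actual unRAID config.)

Code:
# Two separate bind mounts look like two different mounts inside the container,
# so the hard link fails and Radarr falls back to copying:
#   -v /mnt/tank/downloads:/downloads -v /mnt/tank/movies:/movies
# Mapping a single parent keeps everything on one mount instead:
docker run -d --name radarr \
  -p 7878:7878 \
  -v /mnt/tank/media:/data \
  linuxserver/radarr
# then point Deluge at /data/downloads and the Radarr root folder at /data/movies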
Option 1:

ESXi with an OMV or napp-it VM for storage and a RancherOS VM for Dockers. Not FreeNAS: I tried that first, and FreeNAS crashed and burned with my RAID controller, even in JBOD mode. I have not tried OMV/napp-it yet.

Pros:

  • I know it (VMware).
  • Better compatibility with the VMs I use.
  • GUI for ZFS management/alerts/status.
Cons:
  • Would need a SLOG for NFS performance, since the datastore is mounted via NFS.
  • Dockers would have to run inside a VM. Possible performance hit?
  • OMV's ZFS support is a plug-in, not “native” support. Not sure about napp-it.

Option 2:

Proxmox with Docker/Portainer running in parallel on the host, plus a Check_MK container for ZFS monitoring. (A Portainer deployment sketch follows the pros/cons below.)

Pros:

  • Can manage ZFS locally and natively.
  • VM storage is “local” to ZFS, meaning the datastore is not NFS.
  • Very easy to deploy new Dockers with local storage. Again, no NFS share.
  • Can get away with no SLOG from a performance perspective, though not from a data-safety one. I don't really care about VM data safety; I can just roll back to a snapshot.
  • Bare-metal Docker performance.
Cons:
  • Unsafe for production?
  • Docker is out-of-band, meaning I'd have to run two separate GUIs to manage/monitor everything.
  • No GUI for ZFS management; must use Check_MK for monitoring and alerts.
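(For reference, a minimal Portainer deployment alongside Proxmox might look like the sketch below. Docker is installed directly on the Debian base of the Proxmox host; the image name and port are the ones Portainer documented at the time, nothing Proxmox-specific.)

Code:
# assumes Docker is already installed on the Proxmox host (Debian underneath)
docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer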
Option 3:

Same as option 1, but with Proxmox instead of ESXi. This lets Proxmox manage the ZFS pool natively, without needing to share the pool back to the hypervisor via NFS; basically, it avoids the SLOG. Otherwise, the same pros/cons apply.

Questions:

  • Any further considerations?
  • How the heck do I manage shares to achieve the hard-linking I want? This part has me the most confused. I've got 30+ tabs open and it's only getting worse :'(
  • How does auto ZFS snapshotting work? What gets snapshotted and what doesn't? Admittedly, I haven't researched this yet.
 

gea (Well-Known Member)
about Slog
It does not matter which filesystem you use, or whether you access it via iSCSI, NFS, SMB, or directly: whenever you want performance, you must use caches. In the case of ZFS these are RAM-based read/write caches. The write cache holds up to 5 s of writes (Solaris) or has a fixed size (10% of RAM, max 4 GB by default) on Open-ZFS. On a crash, the content of the write cache is lost. While this does not affect ZFS consistency, thanks to Copy-on-Write, a database or a guest filesystem may become corrupted. With an older filesystem you can use a hardware RAID controller with BBU/flash protection to protect the write cache; with ZFS you enable sync write to protect it. To improve sync-write performance you can add a dedicated Slog; without a Slog this protection is done on-pool on a device named ZIL. If you do not need or want this extra security, just set sync to disabled (on all ZFS platforms, regardless of whether you use NFS, iSCSI, or SMB).
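(A quick sketch of what that looks like in practice on Open-ZFS; the pool/dataset names and the device path are placeholders.)

Code:
# show the current sync setting of a dataset
zfs get sync tank/vmstore

# drop sync-write protection entirely (fast, but the last few seconds of
# writes can be lost on a crash/power loss)
zfs set sync=disabled tank/vmstore

# or keep sync writes and accelerate them with a dedicated Slog device
zpool add tank log /dev/nvme0n1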

ZFS snaps
ZFS is Copy-on-Write, which means a data block is never overwritten/modified in place but always written anew, so the former state remains intact; on success, the former data block becomes available again for further writes. A snap in this situation is no more than a protection of those former data blocks (no copy or delta file is involved, unlike on ESXi). This is why ZFS can hold tens of thousands of snaps and create them almost without delay.
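(To illustrate with example dataset names: a snapshot protects whatever is in that one dataset at that moment; other datasets are untouched. Auto-snapshot tools, e.g. the zfs-auto-snapshot package on Debian/Proxmox or napp-it's autosnap jobs, simply run the same command on a schedule.)

Code:
# take a snapshot of the media dataset
zfs snapshot tank/media@2018-09-01

# list all snapshots
zfs list -t snapshot

# roll the dataset back to that snapshot (discards later changes)
zfs rollback tank/media@2018-09-01

# or just browse/copy old file versions from the hidden snapshot directory
ls /tank/media/.zfs/snapshot/2018-09-01/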

about napp-it
Napp-it is a management environment on top of a default OS setup of Oracle Solaris (native, genuine ZFS) or the Solaris forks OmniOS and OpenIndiana (Open-ZFS), with a ready-to-use ESXi storage VM available. Like all Solarish systems, ZFS is built in, not an add-on.
 

Ahira (New Member)
Thanks everyone for the replies and votes. Looks like I'll be rolling with option 3, at least as a test. A couple of quick questions/updates: due to heat and noise, I'm not able to adequately cool the 1.2TB 10k SAS drives, so I'll be using Intel SSD DC2500 480GB drives instead.
1. They are reported as 4K physical / 512B logical sectors. I'm not sure how (or if) this affects ZFS; should I be changing anything?
2. Is OMV reliable for ZFS? I'll be running with sync disabled, as I'm not worried about restoring a VM if it crashes; I'll just restore the most recent snapshot.
3. Regardless of which solution I use, what is the optimal setup for the disks? One RAIDZ2 vdev of all 8 drives, or striped mirrors? Usage will be mostly media files, plus a few VMs, but nothing really performance-critical except perhaps (most likely not) an OPNsense VM. (Both layouts are sketched below.)
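(For reference, the two layouts being weighed, with placeholder pool and device names; a single RAIDZ2 vdev gives more capacity, while striped mirrors give better IOPS for VMs.)

Code:
# one RAIDZ2 vdev across all 8 SSDs: capacity of ~6 drives, any two can fail,
# but roughly single-drive IOPS for the whole vdev
zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

# four striped mirrors: capacity of 4 drives, one drive per mirror can fail,
# much better random I/O for VM workloads
zpool create -o ashift=12 tank \
  mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh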
 

arglebargle
re 1: You'll get the best performance with ashift=12 on 4k drives. It's generally advised to create pools with ashift=12 if you think you'll be adding 4k sector drives to them during their lifetime. You could use ashift=9 to reclaim some space, if that's important to you.

See here for more information: ZFS: Performance and capacity impact of ashift=9 on 4K sector drives
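(A quick way to double-check what you've got; the pool name is an example. Note that ashift is fixed per vdev at creation time, so it has to be set when the pool or vdev is created.)

Code:
# ashift of an existing pool (ZFS on Linux); may read 0 if it was auto-detected
zpool get ashift tank

# per-vdev ashift from the cached pool configuration
zdb -C tank | grep ashift

# physical vs. logical sector sizes as the kernel sees them
lsblk -o NAME,PHY-SEC,LOG-SEC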

Also, I'm curious to see what you end up using for hardlinks: I've been meaning to set something similar up for quite some time. My first thought was hardlinks as well but I haven't investigated any of the post-download processing tools that are out there.
 

Ahira (New Member)
Thanks for the reply, Argle. I've indeed set it to ashift=12; honestly, it looks like that was the default anyway. And I got hard-linking to work! I used Swizzin to deploy a seedbox and mounted an NFS share from Proxmox as /mnt/media. Works great.
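(For anyone following along: the reason this works is that the download directory and the library both live under the same /mnt/media mount. A quick check that a link really happened instead of a copy, with example file paths:)

Code:
# create a hard link by hand; both paths are on the same NFS mount
ln /mnt/media/downloads/movie.mkv /mnt/media/movies/movie.mkv

# identical inode numbers on both names means no duplicate data was written
stat -c '%i %n' /mnt/media/downloads/movie.mkv /mnt/media/movies/movie.mkv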