Proxmox downsides?


ttabbal

Active Member
I'm considering converting from FreeNAS to Proxmox for my home server. I'd like it to act more like a home-lab AIO, and FreeNAS just doesn't go there. I'm just wondering if anyone has issues with it that bug them before I make the switch.

My current uses are mostly as a filer, but with a few services running in jails. I'd also like to use Crashplan as a backup solution, but it's just not stable on FreeBSD. It works for a while, then stops (usually an auto-upgrade breaks it), sometimes doesn't come back up after a reboot, etc. Linux is a supported platform, so Proxmox seems like a good solution. Just about everything I do would work in Linux, so it would mostly be containers running.

I know about bhyve, but it doesn't boot Linux guests on older hardware, and I just upgraded, so that's out.

I've considered a VMware-based setup, but I'd like to stick to open platforms if possible. I also like the lower overhead of containers/jails/chroots/whatever-you-call-them for most services. I could roll my own, but it seems like I'd end up with basically the same thing as Proxmox, minus the admin tools. :)
 

PigLover

Moderator
Proxmox works very well. I don't know of any particular technical downsides to it.

Their networking model works for the basics but is nowhere near as flexible as ESX or even Hyper-V.

They support PCI passthrough (for your filer disks if you want to do an AIO). They also support passthrough of NICs if you need high performance network IO. There is some support for SR-IOV but I've not tried it.
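If you do end up trying passthrough later, it's not much work on the Proxmox side. It goes roughly like this (just a sketch from memory - the VM ID and PCI address below are placeholders, and IOMMU has to be enabled in the BIOS and on the kernel command line first):

    # enable IOMMU on an Intel box (edit /etc/default/grub, then update-grub and reboot)
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

    # find the PCI address of the NIC/HBA you want to hand to the guest
    lspci | grep -iE 'ethernet|sas'

    # hand device 01:00.0 to VM 100 (both IDs are made up - use your own)
    qm set 100 -hostpci0 01:00.0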

Though they are a bit better than they used to be, the biggest downside remains their people (at least the ones providing front-line support and participating on their forums). They have an arrogance about them that will push you over the edge if you need help, a "we know better than anyone else" attitude. And from time to time they do stupid things and just don't seem to care (see a recent thread on their forums about removing support for tagging on VLAN 1 - they pulled support in a point release, didn't announce it, and their rationale is built on a ridiculous, nannyish "somebody might have problems if they do this" argument).

For a single-host virtual solution it is great. For multi-host clusters they require a minimum of three nodes due to their quorum model (as opposed to ESX or Hyper-V, which are happy with two-node clusters).
 

ttabbal

Active Member
I don't expect to use passthrough or SR-IOV, though it's nice to know they are there. It supports ZFS directly, so I don't see much reason to do the whole storage VM thing. Is there some reason to avoid this setup? The host OS running the storage seems reasonable to me, but perhaps I'm missing something. It also means I don't have to run every access to the main storage through network connections, virtual or otherwise.

It does mean I have to reconfigure things like network shares for the new OS. That's mildly annoying, but it doesn't take much to fix it. I only share out a few mount points.
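The shares themselves are only a few lines of smb.conf each on the Linux side, something like this (a sketch - the dataset path, share name, and user are just examples):

    # /etc/samba/smb.conf (hypothetical share of a ZFS dataset)
    [media]
        path = /tank/media
        browseable = yes
        read only = no
        valid users = myuser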

I saw that VLAN1 post on their forums. Not a great way to handle it, IMO. I'll probably post here more than there.
 

PigLover

Moderator
If you have enough memory on the host I see no reason not to run ZFS on it for the filer. Just be careful to set the ARC max size or it will suck the host dry of memory under heavy disk IO load. I've run systems that way with no problem whatsoever.
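On a Proxmox (Debian) host that's just a ZoL module parameter, something like this (a sketch - the 8 GiB cap is an arbitrary example, size it for your own box):

    # /etc/modprobe.d/zfs.conf - cap ARC at 8 GiB (value is in bytes)
    options zfs zfs_arc_max=8589934592

    # apply to the running system without a reboot
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max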

I do think the packaged NAS products - both commercial (Synology, etc.) and FOSS (FreeNAS, napp-it, etc.) - offer some handy management tooling that is really nice to have. But if you are comfortable managing ZFS "by hand", then your approach works great with Proxmox.

After many iterations I'm currently running a cluster of 5 Proxmox hosts using Ceph for VM storage (2 SSDs as OSDs on each host), plus a separate FreeNAS filer for bulk storage and VM backups. This works out well because I can power down any 2 nodes of the cluster at a time for maintenance, upgrades, or just because, without ever losing service. The filer is single-hosted, but I'm careful not to let any running services depend on it, so I can maintain it at will. It is also a rather power-hungry config, though, and relies on a 10GbE backbone.
 

Patrick

Administrator
Staff member
Just adding here, the storage integration with Proxmox is a great feature. Single node Proxmox works well. You may want to look into OVS on the networking side. You can also put FreeNAS in a VM if you are feeling really adventurous.
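For OVS, once the openvswitch-switch package is installed the /etc/network/interfaces config ends up looking roughly like this (a sketch - interface names and addresses are placeholders):

    # Open vSwitch bridge carrying the host IP, with one physical uplink
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        ovs_type OVSBridge
        ovs_ports eno1

    allow-vmbr0 eno1
    iface eno1 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0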

Cluster-wise, my first cluster was a two-node cluster simply for failover, and it is still running years later. The STH hosting VMs use a mix of Ceph and ZFS, but I went from a 3-node cluster up to a 7-node cluster, which Proxmox seems to work much better with. That echoes @PigLover's move to a 5-node cluster.

I would also suggest, given current 10Gb pricing, looking at 10Gb if you do decide to think about Ceph/multi-node clusters.
 

ttabbal

Active Member
Sure, I always set arc_max. I don't like the way ZFS handles (or doesn't handle) freeing ARC RAM, so I just consider a chunk of memory dedicated to ARC. It seems to work well for me.

I've historically run ZFS on Solaris using only SSH for management, so the CLI tools are no problem. I like a couple of things FreeNAS makes easy, like scheduled scrubs/replication, but I did all that in cron before, so I can do it again. :)
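A couple of cron entries cover it, e.g. (a sketch - the pool name and schedule are just examples):

    # /etc/cron.d/zfs-maintenance - hypothetical pool "tank"
    # scrub on the 1st of each month at 02:00
    0 2 1 * * root /sbin/zpool scrub tank
    # weekly snapshot every Sunday at 03:00 (note the escaped % for cron)
    0 3 * * 0 root /sbin/zfs snapshot tank@weekly-$(date +\%Y\%m\%d)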

If I were to do the cluster thing, 10Gb would be the way to go, but switches are still insanely priced, IMO. Perhaps not if I were building 5+ node clusters, but for a single server for my home needs, it's just not there. Once I get the new fiber run installed, I'll have one workstation on 10Gb, and that's good for now. :)
 

ttabbal

Active Member
I did run into one little thing while playing around with a test install. Installing to a ZFS mirror fails if the drives aren't identical. Before I go get flamed over there, does anyone here know a way around it? Yes, the total size would be the smaller of the two, but it's a boot drive, so what? I suppose I can attach the second drive and resilver after the fact. Need to look up installing grub manually to the mirror; it's been a while since I did manual grub installs...
 

Danic

Member
ttabbal said: "I did run into one little thing playing around with a test install. Installing to a ZFS mirror fails if the drives aren't identical. Before I go get flamed over there, anyone here know a way around it? Yes, the total size would be the smaller of the two, but it's a boot drive, so what? I can attach it and resilver after the fact I suppose. Need to look up installing grub manually to the mirror, been a while since I did manual grub installs..."
According to this wiki you can install Proxmox on top of Debian. I bet the Debian installer/live environment is more forgiving than the Proxmox installer. Then creating a ZFS mirror with different-size drives should just need '-f' to force it.
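Something along these lines should do it for the mismatched mirror (a sketch - the pool and device names are placeholders):

    # -f overrides the "mirror contains devices of different sizes" complaint
    zpool create -f tank mirror /dev/sdX /dev/sdY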
 

PigLover

Moderator
No, the Debian installer won't fix this issue. The Debian installer won't install to boot from ZFS at all - you'd have to hack that in after installing to ext4. AFAIK, the Proxmox installer is currently the only Linux installer that will directly install to boot from ZoL.

 

ttabbal

Active Member
That's what I ended up doing: install to ZFS on a single drive, add the mirror, then use grub-install to get the bootloader onto the second disk. It would be nice if the installer would warn you that it's a non-optimal setup and let you do it anyway, but oh well.

Other than wasted space on the larger drive, I don't see any reason not to do it.
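For anyone else who hits this, the sequence was roughly the following (a sketch - the pool, partition, and disk names are placeholders, so check yours with zpool status and lsblk first):

    # attach the second drive to turn the single-disk rpool into a mirror
    zpool attach rpool /dev/sda2 /dev/sdb2

    # watch the resilver finish
    zpool status rpool

    # put GRUB on the second disk as well so either drive can boot
    grub-install /dev/sdb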