Proxmox All in One Options Revisited


T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,057
113
I know we've had some threads on this in the past, but has anyone got a Proxmox all-in-one setup they're liking that easily manages network shares, etc.?

Getting ready to take down my home AIO and would prefer to go to Proxmox so I don't need to pass through an HBA, which would save me ~11W by not needing the HBA at all, and also save the $180/year VMUG license.

I've got no problem with Proxmox/AIO where it's more of an "all in one" server that doesn't need tons of Windows shares, folders, etc... but for a home AIO I'd like easier/faster management of permissions for users and folder/directory shares too.

What containers or VMs are people using now that have worked for the last few months, or that you're trying out now?
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,804
113
I have one more FreeNAS box that I am going to switch over. I am just going to CLI the user permissions. Easy to do for home usage.
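For home use the CLI steps really are short. A sketch, assuming a Debian-based Proxmox host with Samba installed; the user name and the pool/dataset `tank/media` are example values:

```shell
# Create a system user with no login shell, plus a matching Samba credential
useradd -M -s /usr/sbin/nologin mediauser
smbpasswd -a mediauser

# Create a dataset and give the user ownership of it
zfs create tank/media
chown -R mediauser:mediauser /tank/media
chmod 770 /tank/media

# Minimal share definition appended to the Samba config
cat >> /etc/samba/smb.conf <<'EOF'
[media]
   path = /tank/media
   valid users = mediauser
   read only = no
EOF
systemctl restart smbd
```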
 

Monoman

Active Member
Oct 16, 2013
410
160
43
T_Minus

What sharing services do you need? NFS is covered natively by ZFS, and SMB by the TKL container.
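Sharing a dataset over NFS via the native ZFS property is basically a one-liner. A sketch; the pool/dataset name and subnet are examples, and on Linux this needs the NFS server package (e.g. `nfs-kernel-server`) installed:

```shell
# Export tank/media read-write to one subnet via the sharenfs property
zfs set sharenfs="rw=@192.168.1.0/24" tank/media

# Check what ZFS thinks is shared, and what the kernel actually exports
zfs get sharenfs tank/media
showmount -e localhost
```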
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,057
113
I should have clarified: when I said "easier/faster management" I meant I wanted a GUI for handling users and access, as well as ZFS file system and permission management.
 
  • Like
Reactions: Monoman

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,804
113
Another option is to use ownCloud or similar for managing user shares, then use native ZFS for infrastructure.
 
  • Like
Reactions: T_Minus

gea

Well-Known Member
Dec 31, 2010
3,156
1,195
113
DE
A GUI for a ZFS appliance on Linux is a pain.
I have offered napp-it for OpenSolaris, NexentaCore (with Debian packaging), OpenIndiana 151a (an OpenSolaris fork), OmniOS and then OI Hipster (the last two based on Illumos) without major trouble. This is because every Solarish distribution - even the minimal ones like Solaris Text, OmniOS or OI minimal - includes everything you need for a ZFS storage appliance: ZFS and NFS (invented by Sun), network virtualisation (Crossbow), FC/iSCSI (Comstar) and the kernel-based SMB server (all Sun projects).

These services are included and nearly identical in every Solaris or Solaris fork distribution, and they are completely maintained either by Oracle or, for the free forks, by Illumos. I have never seen a distribution without them, or problems with them on updates.

On Linux, every distribution behaves differently or does not care about core storage services. Even a minimal update can break everything or change behaviour completely.

This is why, if at all, a reliable and continuously maintained web-managed ZFS storage appliance would only be possible with a dedicated freeze of a Linux distribution, not for Linux in general (as Synology or QNAP do).
 
Last edited:
  • Like
Reactions: T_Minus

gea

Well-Known Member
Dec 31, 2010
3,156
1,195
113
DE
For me, Xpenology is the best proof of the problem.
When you buy a Synology, they guarantee working updates on their hardware with all the extras that are not part of a minimal Linux.

If you use the "free" fork Xpenology, you are lost quite often.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,057
113
I've wanted to do ownCloud for a long time now - maybe it's time!

@Monoman I'm still unsure about Xpenology in general, or rather whether I trust it not to lock me out of my own data at some point. I had no problems with the actual hardware/software on my Synology / old NAS either.
 

gea

Well-Known Member
Dec 31, 2010
3,156
1,195
113
DE
Synology = Linux +

- mostly cheap desktop hardware without ECC
- mdadm + ext4 raid, or
- mdadm + btrfs

The only advantages are disk-by-disk expandability and the many home apps;
no self-healing, far from the state-of-the-art storage features of ZFS
 

nk215

Active Member
Oct 6, 2015
412
143
43
50
Synology with btrfs can self-heal data as long as one does not use an SSD for cache and the volume is not degraded.

The higher-end units do have ECC RAM.

Since Synology does not use as much memory as ZFS units, the chance of data getting corrupted in memory is much lower. A higher-end Synology works great with 4-8GB.
 

gea

Well-Known Member
Dec 31, 2010
3,156
1,195
113
DE
1.
Synology uses btrfs on LVM on mdadm.
From the btrfs point of view, the array is a single disk with two copies of the metadata.

So the raid can repair parity errors (it is not aware of checksums), and btrfs
can detect checksum errors but repair them only when metadata is the problem;
otherwise it just reports the problem, as it has no access to data redundancy.

It seems that DSM has improved to trigger a raid rebuild to auto-repair
under special conditions. Not as effective as btrfs/ZFS raid, but a step forward
and far better than ext4 raid.

2.
This is not a feature, just a way to save money.
HP offers server-grade 4-bay NAS hardware with ECC for 200 Euro.

3.
This is not a feature, just a way to save money.

Any modern 64-bit OS - whether Linux (e.g. Synology), Unix (BSD, OSX, Solarish) or Windows - wants 1-2 GB RAM. You can ignore the minimal extra requirement for checksumming with btrfs, ReFS or ZFS. If you have no more RAM than that, every read/write must be processed by the disks or the raid alone. Not a problem if one person views a video, but bad for concurrent reads/writes with many users or processes. An SSD cache can help a little but is 100x slower than RAM.

ZFS includes RAM-based cache mechanisms that are part of ZFS. If the RAM is not otherwise requested, ZFS uses up to 4GB of it as writecache to transform small, slow random writes into large, fast sequential writes (with a ZIL/Slog to protect the cache), and uses the rest as readcache for metadata and small random reads. You can extend it with an SSD cache with read-ahead, but the key to performance is RAM. On a well-designed ZFS system, >80% of all reads come from RAM. Count the RAM on a 16GB ZFS system like this: 2GB for the OS, 4GB for writecache and 10GB for readcache. Given that around 1% of your data is metadata, which you want cached at the very least, you want the readcache much larger than 1% of the active data (not the pool size).

This is why you want RAM on ZFS. A traditional hardware or mdadm raid can use some OS cache mechanisms or a hardware raid cache to improve data security or performance, but ZFS plays in a completely different league.
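On ZFS on Linux (which Proxmox ships) you can see how much RAM the ARC is actually using and how often reads hit it. A sketch reading the kernel's ARC counters; the 10 GiB cap is an example value:

```shell
# Current ARC size and target size, in bytes
awk '$1 == "size" || $1 == "c" {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Rough cumulative ARC hit ratio
awk '$1 == "hits" {h=$3} $1 == "misses" {m=$3} END {printf "hit ratio: %.1f%%\n", 100*h/(h+m)}' \
    /proc/spl/kstat/zfs/arcstats

# Cap the ARC (here 10 GiB) so VMs keep the rest; applies on module reload/reboot
echo "options zfs zfs_arc_max=10737418240" > /etc/modprobe.d/zfs.conf
```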

The main advantage of Synology is not state-of-the-art storage for production use with VMs, databases or many users, but its feature set/services for home use.

Back to this thread: Synology is not a solution to replace the integrated ZFS in Proxmox, security-, feature- or performance-wise.
 
Last edited:
  • Like
Reactions: T_Minus and Patrick

acmcool

Banned
Jun 23, 2015
610
76
28
40
Woodbury,MN
I've wanted to do ownCloud for a long time now - maybe it's time!

@Monoman I'm still unsure about Xpenology in general, or rather whether I trust it not to lock me out of my own data at some point. I had no problems with the actual hardware/software on my Synology / old NAS either.
I believe you can access the array in Debian/Ubuntu...
The only issue would be losing the data due to some software glitch.
I am in the same boat, trying to decide on Proxmox and AIO.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,057
113
Another option is to use ownCloud or similar for managing user shares, then use native ZFS for infrastructure.
Does ownCloud do anything regarding data structure/organization that would prevent accessing the data 'raw' directly on the Proxmox host or from other guests? And if ownCloud went down/crashed, could it theoretically destroy all data? I'm assuming the answer here is 'yes' - if the ownCloud VM itself died, the data would still be accessible (just not via network/shares) on the host machine, right?
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
@T_Minus - your thread here has morphed all over the place. It is kind of a fun birdwalk to read through.

I don't think Owncloud/Nextcloud is going to do what you want. It is not a NAS management platform, but rather more of a storage upload/download & collaboration tool. While it is interesting when the application fits, in this case it is not going to be much help.

If I do understand your target application, what you want is a virtualization environment that has a full-service NAS capability as part of the virtualization host OS rather than a more "vanilla" virtualization host (like ESXi) that requires you to host your NAS as a VM, which implies a pass-through HBA requirement for performance and to give ZFS direct access to the disks. From your first post it is this pass-through HBA that you are trying to avoid.

This is a shared quest...AFAIK, there are only a few reasonable options:
  • Proxmox. Upside is solid virtualization environment based on KVM, reasonable community support on their forums with active participation from the developers (who have recently figured out that insulting their user base on the forums is a bad idea). Downside is that there is no known management GUI for sharing as a NAS. You are pretty much down-and-dirty with the CLI. Note: this is the path I have chosen.
  • FreeNAS. This is a story of frustration, with the "Corral" debacle still fresh in many people's minds. While some may not like it, the NAS management tools of FreeNAS are quite good - excellent even. The promise of a virtualization management toolbox attached to FreeNAS was the major draw to Corral, and the heart of the letdown when they pulled it back. With FreeNAS 11.1's imminent release there is still hope here. But few people who committed to it and then had the rug pulled out with Corral are likely to try again.
  • Napp-It. Napp-It provides a really good NAS management tool suite for Solarish OSs (Solaris, Illumos, etc.). There is good support for KVM-based virtualization here. But if you want the NAS as the VM host you have the opposite problem from Proxmox: you'll be managing the VMs from the CLI (or an add-in KVM management suite). Choose your poison...
The other options people have suggested go downhill fast. For example, Xpenology could be fun, and might do what you want if you can limit your VM needs to those that can be hosted in Docker. But you'd tear your hair out pretty quickly trying to keep up with Xpenology's "quirks". Etc, etc. There are even variants using Hyper-V and Windows disk sharing - but I don't recommend going there anymore.

You could also go back to hybrids - FreeNAS or Napp-It under Proxmox or ESXi, but this is what you were trying to avoid.

Personally, after going at this nine ways to Sunday, I've just settled in on Proxmox + Docker/Portainer + CLI-based NAS services. It's really not that hard to manage the file sharing without the pretty GUI.
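The Portainer part of that stack is a single container. A sketch, assuming Docker is already installed on the host; the image name and port reflect the Portainer release current at the time of this thread:

```shell
# Keep Portainer's state in a named volume and let it manage the local daemon
docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer
```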
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
Couldn't something like Webmin help a bit when it comes to NFS or SMB?
I'm not a huge fan of it and never really used it, but this might be a valid case for it.
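For what it's worth, Webmin installs from its own apt repository and serves its UI on port 10000; its Samba and NFS export modules cover the basics. A sketch for a Debian-based host; the repository URL and key location may have changed since:

```shell
# Add the Webmin repository and signing key (details as of this writing)
echo "deb https://download.webmin.com/download/repository sarge contrib" \
  > /etc/apt/sources.list.d/webmin.list
wget -qO- https://download.webmin.com/jcameron-key.asc | apt-key add -

apt-get update && apt-get install -y webmin
# UI is afterwards at https://<host>:10000
```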
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,057
113
@T_Minus - your thread here has morphed all over the place. It is kind of a fun birdwalk to read through.

I don't think Owncloud/Nextcloud is going to do what you want. It is not a NAS management platform, but rather more of a storage upload/download & collaboration tool. While it is interesting when the application fits, in this case it is not going to be much help.

If I do understand your target application, what you want is a virtualization environment that has a full-service NAS capability as part of the virtualization host OS rather than a more "vanilla" virtualization host (like ESXi) that requires you to host your NAS as a VM, which implies a pass-through HBA requirement for performance and to give ZFS direct access to the disks. From your first post it is this pass-through HBA that you are trying to avoid.

This is a shared quest...AFAIK, there are only a few reasonable options:
  • Proxmox. Upside is solid virtualization environment based on KVM, reasonable community support on their forums with active participation from the developers (who have recently figured out that insulting their user base on the forums is a bad idea). Downside is that there is no known management GUI for sharing as a NAS. You are pretty much down-and-dirty with the CLI. Note: this is the path I have chosen.
  • FreeNAS. This is a story of frustration, with the "Corral" debacle still fresh in many people's minds. While some may not like it, the NAS management tools of FreeNAS are quite good - excellent even. The promise of a virtualization management toolbox attached to FreeNAS was the major draw to Corral, and the heart of the letdown when they pulled it back. With FreeNAS 11.1's imminent release there is still hope here. But few people who committed to it and then had the rug pulled out with Corral are likely to try again.
  • Napp-It. Napp-It provides a really good NAS management tool suite for Solarish OSs (Solaris, Illumos, etc.). There is good support for KVM-based virtualization here. But if you want the NAS as the VM host you have the opposite problem from Proxmox: you'll be managing the VMs from the CLI (or an add-in KVM management suite). Choose your poison...
The other options people have suggested go downhill fast. For example, Xpenology could be fun, and might do what you want if you can limit your VM needs to those that can be hosted in Docker. But you'd tear your hair out pretty quickly trying to keep up with Xpenology's "quirks". Etc, etc. There are even variants using Hyper-V and Windows disk sharing - but I don't recommend going there anymore.

You could also go back to hybrids - FreeNAS or Napp-It under Proxmox or ESXi, but this is what you were trying to avoid.

Personally, after going at this nine ways to Sunday, I've just settled in on Proxmox + Docker/Portainer + CLI-based NAS services. It's really not that hard to manage the file sharing without the pretty GUI.
That's a great overview.

ESXi is only in use on my home AIO now, so besides the power waste/extra parts with the HBA, the ESXi license for the home AIO/lab is still $180-200/year... so it's nice to save $ on both. However, with that said, it looks like I bought a 2-year license last time, so I have around 15 or 18 (forget which) months left. Thus, I'll be sticking with ESXi + Napp-IT and passing through for now.

I'll continue using Proxmox for work stuff, and I can see just going to regular ol' Proxmox in the future for the home AIO and doing it via CLI once I'm in there playing around more.

If FreeNAS Corral worked/was still around, it would be perfect for NAS and my light virtualization requirements, that's for sure.

I think I saw that the new Napp-IT (just installed it) does some sort of virtualization too; that may be worth looking into, although I'm not sure how much I want to rely on OmniOS going forward.
 

gea

Well-Known Member
Dec 31, 2010
3,156
1,195
113
DE
I think I saw that the new Napp-IT (just installed it) does some sort of virtualization too; that may be worth looking into, although I'm not sure how much I want to rely on OmniOS going forward.
The corresponding Illumos distribution that is similar to ESXi or Proxmox is SmartOS, a cloud/VM OS owned by Samsung. It boots from a USB stick and runs from RAM, just like ESXi. It supports Solaris zones, Linux LX zones, KVM and Docker. But as SmartOS has limited global-zone access, using it as a storage server is not intended and would require some work. This is why SmartOS is currently not supported by napp-it.

OmniOS includes KVM and the LX zones from SmartOS. Docker images may be usable after an export/import.
OpenIndiana, another general-use Illumos distribution, lacks LX zones at the moment.
 
Last edited:
  • Like
Reactions: T_Minus