Proxmox VE Build Questions


Free_Norway

New Member
Hi
I'm new to the forum and hope to find some answers/help.

For years I have used various Windows versions with a HW RAID controller as my home server/NAS.

I have now played a little bit with Proxmox and I really like it, having tried most Linux/FreeBSD NAS OSes.
The last setup I played with was:
  • Proxmox on a single new 850 Pro SSD, ZFS
  • Windows guest with GPU PCIe and USB passthrough, used as media player/server (high-res Blu-ray rips/4K...)
What I have got:
  • 8x 4TB SATA disks for storage; 5 of them are in use right now in my Windows HW RAID 6 setup
  • 1x 850 Pro and 1x 850 Evo SSD 256GB
  • 1x 2TB + 1x 1.5TB + 1x 1TB older SATA disks
And what I want to achieve:
  • Proxmox on a mirrored SSD zpool
  • 8x 4TB drives in ZFS RAIDZ1/Z2 (first migrate the data from the HW RAID to another server)
  • 2-5 VMs for Windows/NAS OS/pfSense/other OS experiments
My biggest problem is my lack of experience with CLI administration of ZFS.
I would like an easy-to-use ZFS GUI tool (coming from Windows).
I was thinking about Webmin on top of Proxmox, but most threads I have found argue that it's not the best solution.

Any ideas or advice?
 

Patrick

Administrator
Staff member
@Free_Norway we are still using a Proxmox cluster for web hosting. It works very well. We are using both ZFS mirrors and a small Ceph cluster.

Here is a guide to the Proxmox ZFS CLI side. It is basically standard ZFS commands, so it is very simple to use (and easy to look something up if there is an issue).
Add a mirrored zpool to Proxmox VE

Getting a bit fancier, you can setup a ZFS sync to an offsite location using this: Automating Proxmox ZFS backups with pve-zsync

If you look up the commands you need, it should be a 5-minute process at most to get it working with your Proxmox host. Once it is being used by your Proxmox host you are going to use mostly the web interface.
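For reference, the CLI part of that guide boils down to just a couple of commands. A rough sketch, assuming two blank drives (device names are placeholders; use the /dev/disk/by-id paths in practice):
Code:
# Create a mirrored pool on two blank drives (sda/sdb are placeholders)
zpool create -o ashift=12 ssdpool mirror /dev/sda /dev/sdb
zpool status ssdpool

# Register it with Proxmox VE as VM storage (the storage ID "ssdpool" is arbitrary)
pvesm add zfspool ssdpool -pool ssdpool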
 

Free_Norway

New Member
Hi Patrick

Thanks for the reply.

I have used that guide when I played around with Proxmox and a ZFS raidz pool with 4 drives. I got it working without any hiccups, but I'm not sure I would know what to do if something were to happen.
Does Proxmox show status information about the ZFS pool?
 

Patrick

Administrator
Staff member
Proxmox can send you alerts when something goes wrong. For example, when I had a Ceph node overheat and fall out of the array, I got this beauty of an e-mail:
Code:
/etc/cron.daily/logrotate:
cat: /var/run/ceph/osd.6.pid: No such file or directory
cat: /var/run/ceph/osd.8.pid: No such file or directory
From there, I was able to find what the issue was.

My advice: set up a test VM and practice failing virtual disks. My sense is that within an hour or two you will feel comfortable troubleshooting ZFS. Proxmox ZoL commands are very easy to research via Google.
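If you want a completely safe sandbox for that practice, you can also do it with file-backed vdevs, no VM needed. A throwaway sketch (pool and file names are made up):
Code:
# Disposable raidz pool built on sparse files -- safe to break and rebuild
truncate -s 1G /tmp/d1 /tmp/d2 /tmp/d3
zpool create testpool raidz1 /tmp/d1 /tmp/d2 /tmp/d3

# Simulate a failure and a replacement
zpool offline testpool /tmp/d1
zpool status testpool              # pool reports DEGRADED
truncate -s 1G /tmp/d4
zpool replace testpool /tmp/d1 /tmp/d4
zpool status testpool              # resilver runs, then back to ONLINE
zpool destroy testpool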
 

sno.cn

Active Member
Managing ZFS with the CLI is super easy, and it does exactly what you tell it to the first time, which may or may not be the case with a graphical utility. In the past, whenever I've used a GUI to manage ZFS, I've gone in afterwards with the CLI anyway to make sure my pool was configured correctly.

Super easy to get ZFS going:
  • Install and configure Proxmox. If you want to install Proxmox on ZFS, just use the built-in wizard.
  • Follow this guide to add a ZFS pool.
  • That guide is for a mirror. If you want some other configuration, it'll be super easy to find with Google.
  • Set compression on your new pool with 'zfs set compression=lz4 myZfsPool'
  • Create a filesystem on your new pool if you want to. Mine are like 'zfs create myZfsPool/iso' or 'zfs create myZfsPool/media' or 'zfs create myZfsPool/backup' or whatever.
  • Change the mountpoint for your ZFS filesystem. Most of the guides tell you to mount to /mnt and then bind that to /export, but I just mount to /export. You can also mount it to /boobies if you want; it doesn't really matter. So if you made the filesystem iso in your ZFS pool, you could use 'zfs set mountpoint=/export/iso myZfsPool/iso' to mount your iso filesystem to the /export directory (the whole sequence is sketched as commands below).
  • Follow the guide I linked above if you need Proxmox to use your zfs pool. Or just read the Proxmox wiki.
  • Make an LXC container, bind your mounted ZFS filesystem to it, and then use the container to share it on your network. This way you can isolate permissions, and easily reset if you **** something up.
Just get in there and mess around. Follow the guides, mess around some more, learn, and then do it correctly. You'll figure it out. With your 8x HDD setup, I would rather use RAID 10 (striped across 4 mirrors) instead of any raidz configuration. There are arguments on either side of this, but I don't run parity RAID on anything, at home or in production environments; again, that's just my preference.
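Rolled together, those steps look something like this (the pool name, device names, and container ID here are just examples; adjust for your hardware):
Code:
# Striped mirrors across four disks; use /dev/disk/by-id paths on real hardware
zpool create -o ashift=12 myZfsPool mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
zfs set compression=lz4 myZfsPool

# One filesystem per data type, mounted under /export
zfs create myZfsPool/media
zfs set mountpoint=/export/media myZfsPool/media

# Bind-mount the filesystem into an existing LXC container (ID 101) for sharing
pct set 101 -mp0 /export/media,mp=/mnt/media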

Depending on your use case, you may or may not need to add slog and/or cache. I almost always don't use either.
 

Free_Norway

New Member
Thanks for the reply, sno.cn.

Some questions:
  • What does the "bind to /export" do?
  • When googling ZFS topics, is it important that the info is about ZoL, or will FreeBSD/Oracle info be the same?
  • Why would you prefer an LXC container over e.g. KVM virtualization?
    I ask because I have never used LXC.
  • I know there is a lot of discussion about RAID1/10 vs raidz, but my idea was to have the most space possible with some protection against drive failure. The things that will be stored there are replaceable (BD rips/CD rips/RAW pictures...).
    Important data is stored on at least 1 or 2 other media/places.
    The server will be idle/low demand for most of the day (when we are at work/sleeping), so the disks are not strained.
  • How important is the optimal number of disks for raidz?
    I think I found the answer to this here:
    How I Learned to Stop Worrying and Love RAIDZ | Delphix

Thanks for all the input
 

ttabbal

Active Member
I think the bind mention is to bind-mount the filesystem into the container. It makes it look like a normal local filesystem inside the container, which is nice.

LXC and other containers are more efficient as they don't need to virtualize the whole system. They share the kernel with the host; it's a bit like a really secure chroot, BSD jail, or Solaris Zone. The other nice thing is that you don't have to push everything through the network stack; you can use tricks like bind mounts instead. Some things don't work so well in containers, so KVM is nice to have for those. I use containers on my Proxmox host whenever possible just to keep overhead down. They also start up faster as they don't have to boot a kernel, probe hardware, etc.

All the ZFS systems share commands. There are a few things that work less well on Linux than on Solaris, though; stuff like the share settings for creating network-accessible shares tends not to work as well. The core stuff like creating pools/filesystems, scrubs, send/recv, and permissions all works the same. My preference is one large pool, with filesystems for each type of data. Big stuff like ISO images, personal files, backup data, etc. all have their own filesystem. This lets me set things like compression to match the data type. There is no point in having compression enabled on movies, but it does help on documents. It also helps when doing things like send/recv: I can do it more frequently for documents, while the near-unchanging stuff can be done weekly or so.
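For example, something like this (pool and dataset names are invented):
Code:
# One filesystem per data type, with compression matched to the content
zfs create tank/movies
zfs create tank/documents
zfs set compression=off tank/movies        # already-compressed video gains nothing
zfs set compression=lz4 tank/documents     # text compresses well and lz4 is cheap
zfs get compression tank/movies tank/documents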

Array type is all about tradeoffs. You use less space for redundancy with raidz, but expansion is more involved and replacing drives is slower. Performance is also lower, but that might not matter for your needs. I prefer mirrors, as with 10Gb networking the performance difference is dramatic, particularly for random I/O, which is almost all I/O on a server since there are other clients, background processes, VMs/containers, etc. I also like that I can expand the array by adding or replacing 2 drives, rather than however many I used for raidz. And repairs are significantly faster: replacing a failing drive takes an hour or two, vs. a day or so for larger raidz drives. But those are my reasons and needs; yours may well be different. Whatever type you use, back up the important data. :)
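The kind of expansion I mean looks roughly like this (device names are placeholders):
Code:
# Grow the pool two drives at a time by adding another mirror vdev
zpool add tank mirror /dev/sdi /dev/sdj

# Or grow an existing mirror by swapping its drives for larger ones, one at a time
zpool set autoexpand=on tank
zpool replace tank /dev/sdc /dev/sdk
# let the resilver finish before replacing the partner drive
zpool replace tank /dev/sdd /dev/sdl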
 

Free_Norway

New Member
A little update:
I have messed around with 6-8 Proxmox installations and tried out many of the built-in features; overall I like it :)
Since the start I have changed to a new CPU/motherboard combo and have struggled to repeat my good KVM Win 10 tests.
Since I only have one server chassis, I have tried to virtualize my Windows installation with PCIe passthrough of the RAID card.
None of the passthrough configurations I found on various sites worked.
Two configs I tried worked on boot (the RAID card BIOS showed up and worked), but Proxmox crashed with a fatal hardware error when Windows booted.
I got another config working (hostpci0: 0x:00.0), but the speed was horrible; I suppose this config is not really PCIe?
The next thing I really can't figure out is how to get an LXC container up and running with an easy-to-use interface to manage SMB network shares.
Does somebody have any ideas, tips?

Regards
Sebastian
 

Patrick

Administrator
Staff member
@Free_Norway you may think this is a crazy idea, but why not try simply using Proxmox for storage? That works very well.
 

Free_Norway

New Member
I have actually thought of that, but one of the things that got me trying Proxmox was the virtualization features.
I have many boxes that I would like to virtualize: pfSense firewall, home automation setup/server, media server...
But maybe I have to re-evaluate what is possible to accomplish with a virtualization platform like Proxmox right now.
I would welcome some more advanced KVM features in the Proxmox GUI, but maybe that is not the intent of the distro.

Sebastian
 

ttabbal

Active Member
Features like PCI passthrough are not as stable in Proxmox/KVM as they are in ESXi, so they are harder to set up and such, as you discovered.

Running a Windows VM for storage on a Proxmox host seems like a very odd choice. It's like having all the downsides of Windows and all the downsides of Linux at the same time. If the only reason for Windows is SMB configuration, you might consider something like WebMin to manage shares. My needs are pretty simple for sharing, so I just use command line tools to configure it.

I think the best use case for Proxmox is with the storage handled natively. I also manage sharing natively on the host, which has pros/cons. You could also set up a container to do the network sharing and bind-mount the local filesystems into it. For media serving and other basic automation tasks, it has performed well for me.
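If you go the container route for sharing, the inside of the container can be as simple as a stock Samba install. A bare-bones sketch, assuming a Debian-based container with the ZFS filesystem already bind-mounted at /srv/media (the path and share name are made up):
Code:
# Inside the container, with the dataset already bind-mounted at /srv/media
apt install samba
cat >> /etc/samba/smb.conf <<'EOF'
[media]
   path = /srv/media
   read only = no
   guest ok = yes
EOF
systemctl restart smbd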

One thing I don't like the idea of is running the firewall on my main server. I want the air-gap. One less layer of complexity means one less layer an attacker can exploit. pfSense doesn't need a lot of hardware, so an old/cheap/low power dedicated box is the way to go there in my opinion.
 

Free_Norway

New Member
Sorry, I didn't explain that well enough!
The Windows KVM guest with PCIe passthrough was an attempt to move all the data off the Windows install without needing another machine; I will do this now by moving the disks and RAID controller to another machine I can borrow.

On the firewall part:
I have a dedicated box with a low-power CPU for this; virtualizing was an idea to have one less box and use less power, but I will reconsider it.
I only use it at home to easily route all my outgoing traffic through multiple VPN connections.

I will try Proxmox for a couple more configs with native storage.

Any tips on where to find good information about the use of containers you described?
 

vl1969

Active Member
That's funny, I am in the process of planning the exact same setup as yours, maybe with slightly lower specs, but close enough.

So if I read all the responses properly,
you and I should use ZFS RAID1 or 10, not raidz, for the storage setup.
Since we both have single-box setups, we should design the storage locally and use an LXC container to essentially emulate shared storage, using local disk arrays that are not part of the Proxmox local storage setup.

I am still confused about how to use all the mismatched disks properly.
I cannot build a pool on them, as ZFS requires same-size drives in a pool, so will we lose all those drives?
I have 3x 3TB disks, 2x 1TB and 4 or 5 2TB disks in my setup.
I will use one Intel 120GB SSD and one Samsung 850 EVO 120GB for the ZFS RAID-1 Proxmox setup.
I am thinking of using a 2x 1TB ZFS RAID-1 pool for Proxmox local storage for all the VM-related things
(ISOs, templates etc.). I guess I have enough space on the system SSDs to put running VMs there,
but I also currently have all the other disks filled with data (media etc.).
How can I use the rest of the drives to store and share the actual data on them?
 

Free_Norway

New Member
Hi vl1969
For my setup RAID1 or RAID 10 is not an option; losing half of the disks to redundancy makes no sense for media files/non-critical data.
In the end I will not use different-sized disks in the setup.
I'm playing with two options:
- borrow a NAS/4-disk pool to temporarily store the data from the 4 drives I use on the RAID controller
- create a "fake 8-disk" raidz2 by partitioning the 4x 4TB disks and 1x 2TB disk and then change disks (partitions) one by one.
I'm playing with such a setup right now, to practice the commands/what to do.
One thing I can't figure out is how to share the raidz2 pool over the network; I really don't understand how to use LXC containers and couldn't find any newbie-friendly guides.
I will maybe try a KVM install of OMV or the like to do the sharing and other stuff. I'm sure that's not the most elegant solution, but for the time being it may work.
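The partition trick I'm practicing looks roughly like this (device and partition names here are just placeholders):
Code:
# Eight ~2TB members: two partitions per 4TB disk plus the 2TB disk used whole
# (two partitions on one physical disk count as two members, so one dead disk
#  already eats both parity slots -- fine for practice, risky for real data)
zpool create tank raidz2 /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2 \
    /dev/sdc1 /dev/sdc2 /dev/sdd1 /dev/sde

# Each time a real 4TB drive is freed up, swap it in for one of the partitions
zpool replace tank /dev/sda1 /dev/sdf
zpool status tank    # wait for the resilver to finish before the next replace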
 

Free_Norway

New Member
Oh, I forgot to ask something about ZFS:
when I try failing disks (deleting a partition that simulates a disk), I can't see any indication of a failed drive in Proxmox.
Is that right? Have I missed something?
 

ttabbal

Active Member
I cannot build a pool on them, as ZFS requires same-size drives in a pool, so will we lose all those drives?
That's not true. You can have different size disks in the pool. You can have different size disks in a vdev as well, though you do lose some space that way. A pool is a group of vdevs. A vdev is a group of drives, though it can also be a single drive. Redundancy is on the vdev level. So if you make 2 mirror vdevs, you can lose one drive from each mirror without any data loss in the pool. In RAID terms, the pool is a RAID0.

I have 3x 3TB disks, 2x 1TB and 4 or 5 2TB disks in my setup.
The best setup I can see with that group of drives (mirror/RAID10) is this:

2x3TB - mirror
2x1TB - mirror
2x2TB - mirror
2x2TB - mirror

Optionally, you could also add the remaining 3TB and 2TB in a mirror to that pool. That vdev will only have 2TB usable space though. You could hold them as spares, or add them to other mirrors as a 3-way mirror.

If you want raidz, put the 5x 2TB and 3x 3TB together for an 8-disk raidz2. The system will see the 3TB drives as 2TB, so you will have 8x 2TB drives in practice, with 2 drives' worth of space for the parity. So 12TB usable. You get 8TB (or 10TB if you use the odd drives for another mirror) with the mirror config.
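As commands, those two layouts would look roughly like this (device names are placeholders; use /dev/disk/by-id paths for real disks):
Code:
# Striped mirrors: 2x 3TB + 2x 1TB + 2x 2TB + 2x 2TB
zpool create -o ashift=12 tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf \
    mirror /dev/sdg /dev/sdh

# Or the 8-drive raidz2 alternative; the smallest member (2TB) sets the per-drive size
zpool create -o ashift=12 tank raidz2 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh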

Having some larger drives in a vdev is nice, as it's an easy way to add capacity later. Replace drives that are dying with larger versions one at a time. Once they are all replaced (and autoexpand is on), the pool will grow to use the newly available space.
 

ttabbal

Active Member
Oh, I forgot to ask something about ZFS:
when I try failing disks (deleting a partition that simulates a disk), I can't see any indication of a failed drive in Proxmox.
Is that right? Have I missed something?

Deleting the partition might not make a good test, as ZFS might well read the sectors, see that the data it wants is there and valid, and just keep going. If you want to test failing devices, the best option might be a VirtualBox setup where you remove a virtual drive while the system is running. Or, on real hardware, pull a SATA plug.