Proxmox VE Build Questions


Free_Norway

New Member
Feb 9, 2017
What would happen if one put one 2TB and one 1TB partition on each of the 3TB drives and made a 6-disk + 2-partition raidz2 and a 2-disk + 2-partition RAID1 or raidz?
 

ttabbal

Active Member
Mar 10, 2016
What would happen if one put one 2TB and one 1TB partition on each of the 3TB drives and made a 6-disk + 2-partition raidz2 and a 2-disk + 2-partition RAID1 or raidz?

That can work. The problem you run into is that if you lose one physical disk, you lose more than one virtual disk. If you plan carefully, so that a single failure doesn't take out too much redundancy, it can be OK. I wouldn't recommend it, though, as it makes things far more complex for little gain. You are jumping through a lot of hoops to save a couple of TBs, and when a failure happens, you need to remember how it was all laid out so you can rebuild it. I like the simplicity of just working with whole drives.

With raidz, also consider that rebuild times are long. A 6-disk raidz2 with 2TB drives would take over 12 hours to resilver a single drive, and while that's happening you are stressing the remaining drives. Add in using partitions to make virtual drives, which can cause a lot of extra seeks, and you create the potential for secondary failures. Those added seeks will also cause performance issues even without a bad drive to deal with.
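(Rough arithmetic behind that 12-hour figure, as a sanity check: a resilver has to rewrite roughly 2 TB onto the replacement drive, and 2 TB at a sustained 45-50 MB/s works out to about 12 hours; on a pool that is also serving normal I/O it is usually slower.)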

For the previous disk set I posted about, they had 1TB of unusable space. They could replace one drive and get that back. Just not worth it in my opinion. Your particular drive pile might be less well aligned. But over time drives will die or just get cheap enough that it doesn't make sense to keep 1TB drives in the main array.

In the first post, you mentioned having 8 x 4TB drives to work with. If that's true, definitely skip tricks to use the older smaller mismatched drives.
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
ZFS on partitions is IMHO not a good idea. Maybe nothing will happen for quite a while, until a disk starts to fail in a way that ZFS might not be able to detect/correct because it has no direct physical access to the disk. Same problem as ZFS on hw-raid.

But AFAIK you can mix different-sized disks within a vdev; on the bigger disks, only the capacity of the smallest disk in that vdev is used. You could then later upgrade the smaller HDDs in the vdev one by one and expand the usable size of the vdev, and therefore of the pool.
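A minimal sketch of that upgrade path (pool and device names are hypothetical, and autoexpand has to be set for the pool to grow on its own):

Code:
zpool set autoexpand=on tank
zpool replace tank sdb sdf   # swap one small disk for a bigger one, wait for the resilver
# repeat for each small disk in the vdev; once all are replaced,
# the vdev (and the pool) grows to the new smallest-disk size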

Also, you can stripe two vdevs built from different-sized disks (where the smallest disk in each vdev has a different size). In that case you will certainly get a performance penalty, but the pool size will be vdev1 + vdev2.
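Roughly, with made-up device names, that layout would be created like this:

Code:
# two mirror vdevs of different sizes striped into one pool
zpool create tank mirror sda sdb mirror sdc sdd
# usable capacity ≈ smallest disk of the first mirror + smallest disk of the second mirror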
 

Free_Norway

New Member
Feb 9, 2017
OK
Thanks for the quick answer.
When I check zpool status I get a degraded state for the raidz2 pool I'm playing with.
What kind of message/alarm would you get from pulling a SATA cable?
 

ttabbal

Active Member
Mar 10, 2016
OK
Thanks for the quick answer.
When I check zpool status I get a degraded state for the raidz2 pool I'm playing with.
What kind of message/alarm would you get from pulling a SATA cable?

The vdev should show degraded, with whichever drive you removed listed as "UNAVAIL".
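For a quick check from the shell (the pool name here is just an example):

Code:
zpool status -x        # prints only pools that currently have a problem
zpool status -v tank   # full per-device state for one pool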
 

sno.cn

Active Member
Sep 23, 2016
Sorry, I wanted to ask if the message/alarm should show up in Proxmox?
You should get an email. As ashamed as I am to admit it, that's the only thing I'm using to monitor ZFS health right now, besides checking zpool status with SSH every now and then.
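For reference, those mails typically come from the ZFS Event Daemon (ZED); a minimal sketch of its config on a ZFS-on-Linux box (the variable name differs between releases, so check your own zed.rc):

Code:
# /etc/zfs/zed.d/zed.rc
ZED_EMAIL_ADDR="root"    # older releases call this ZED_EMAIL
ZED_NOTIFY_VERBOSE=1     # also mail on non-error events such as a finished scrub
# then restart the daemon: systemctl restart zfs-zed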
 

vl1969

Active Member
Feb 5, 2014
OK, thanks for the tip.
Now the next question is: how do I manage all this?
I understand that I can create and manage a local storage pool via Proxmox,
so the 2x 1TB pool I can add to Proxmox local storage and use/manage from there.
But I want the rest to be shared storage for all my DATA needs. So I create all the mirrored vdevs as described, create a single pool out of them, then what?
 

sno.cn

Active Member
Sep 23, 2016
It depends on how you want to share it out. What I do is bind mount whatever I want to share into an LXC container (I'm using Ubuntu), and then inside the container install whatever I want to use to share it out. If I'm mostly sharing to Windows clients, I use Samba. For almost everything else I use NFS.

So make an LXC container, and then go to the configuration file and add a mount point. In this example, I have an LXC container '101', so I edit the configuration file with "nano /etc/pve/lxc/101.conf" and add the following line:

Code:
mp0: /mnt/media,mp=/export/media
Now restart the container, log in through console or ssh, and you will see your new mount point. Now you can share it out however you like.
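Equivalently, assuming the same container ID and paths as in the example above, the mount point can be added with the pct tool instead of editing the file by hand:

Code:
pct set 101 -mp0 /mnt/media,mp=/export/media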

You can also do this directly from Proxmox, but I think using containers is a much cleaner way.

Now if you want to mount it directly into a VM, that's not going to happen.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
@sno.cn I'm liking that container method! Going to try/play around with that... would be nice to have a container 'ready to go' with all the sharing/config etc. set up so you can image it and re-load it on the fly. Do you use any other management layer in the container for sharing out to Windows, etc...? Could you use something like FreeNAS or Napp-IT within the container to just manage the share(s)?

I'm getting away from my ESXi + Napp-IT all-in-one and will need to manage shares differently going forward; this Proxmox idea is interesting and may save time/resources.
 

ttabbal

Active Member
Mar 10, 2016
I was going to do the container for sharing. What stopped me was a couple things combined.

1) Bind mounts do not let you traverse filesystems. So if you have a pool mounted at /raid and a bunch of ZFS filesystems under it (/raid/fs1, /raid/fs2, etc.), you can't bind-mount /raid and access fs1, fs2, and so on through it. Annoying. I understand the reasons for it, but still annoying.

2) There was a limit of 10 bind mounts (mp0-mp9) when I looked into it. That could easily have changed by now. Combined with 1), that made my sharing setup impossible to manage, as I have a lot of filesystems. I like it that way so that I can control the frequency and detail of backups and snapshots, as well as enable compression for some filesystems but not others.

If they have fixed 2) so you can have loads of bind mounts, a simple shell script can generate the big list of mount lines.
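For example, a rough sketch of such a script, assuming a pool named "raid" and that each filesystem should appear under /export in the container (pool name, paths, and numbering are all placeholders):

Code:
#!/bin/sh
# Emit one mpN line per ZFS filesystem under the pool "raid",
# ready to paste into the container's /etc/pve/lxc/<vmid>.conf.
i=0
zfs list -H -o name,mountpoint -r raid | tail -n +2 | while read -r name mnt; do
    echo "mp${i}: ${mnt},mp=/export/$(basename "$name")"
    i=$((i + 1))
done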

For now, I just back up smb.conf and /etc/exports.

I could have created the network shares on the host, just for the sharing container, but then I'd have two sets of configuration to manage.
 

sno.cn

Active Member
Sep 23, 2016
@sno.cn I'm liking that container method! Going to try/play around with that... would be nice to have a container 'ready to go' with all the sharing/config etc. set up so you can image it and re-load it on the fly. Do you use any other management layer in the container for sharing out to Windows, etc...? Could you use something like FreeNAS or Napp-IT within the container to just manage the share(s)?

I'm getting away from my ESXi + Napp-IT all-in-one and will need to manage shares differently going forward; this Proxmox idea is interesting and may save time/resources.
I'm just using Ubuntu containers, since FreeNAS is too heavy for my needs. For NFS, I just install nfs-kernel-server in the container, and use the exports file to manage shares. For sharing to Windows at home, I just use samba and make some shares with everything wide open. At work, I'm still using an Ubuntu container with samba, but someone else then uses Windows to manage the share permissions.
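As a concrete sketch of what that looks like inside the container (paths, subnet, and share name are made up, and a wide-open setup like this only makes sense on a trusted home network):

Code:
# /etc/exports -- NFS; reload with: exportfs -ra
/export/media  192.168.1.0/24(rw,no_subtree_check)

# smb.conf share section; guest access may also need "map to guest = Bad User" in [global]
[media]
    path = /export/media
    guest ok = yes
    read only = no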


I was going to do the container for sharing. What stopped me was a couple things combined.

1) Bind mounts do not let you traverse filesystems. So if you have a pool mounted at /raid and a bunch of ZFS filesystems under it (/raid/fs1, /raid/fs2, etc.), you can't bind-mount /raid and access fs1, fs2, and so on through it. Annoying. I understand the reasons for it, but still annoying.

2) There was a limit of 10 bind mounts (mp0-mp9) when I looked into it. That could easily have changed by now. Combined with 1), that made my sharing setup impossible to manage, as I have a lot of filesystems. I like it that way so that I can control the frequency and detail of backups and snapshots, as well as enable compression for some filesystems but not others.

If they have fixed 2) so you can have loads of bind mounts, a simple shell script can generate the big list of mount lines.

For now, I just back up smb.conf and /etc/exports.

I could have created the network shares on the host, just for the sharing container, but then I'd have two sets of configuration to manage.
These are valid points. I'm not using any nested filesystems, only top level, and each, in most cases, is mounted to a separate container since they're for completely different sharing purposes. And then nothing has access to the root pool, so I'd never need to mount it anyway.
 

Free_Norway

New Member
Feb 9, 2017
Hi

Thanks for all the replies.
In the meantime I have set up an OMV VM and tested the sharing from there -> Samba sharing to Win10 doesn't seem to be that stable.
I passed through the whole SATA controller, and after installing the ZFS plugin I can see and configure the ZFS pool.
But the ZFS pool is still visible from Proxmox -> have I set up the pass-through wrong?

A couple of questions about the replies:
  • Which sharing protocol is best for Win10-only clients, Samba?
  • Is there any LXC container with a graphical interface, or do you have to do all of the sharing setup from the command line?
 

ttabbal

Active Member
Mar 10, 2016
If you used PCI passthrough, Proxmox shouldn't be able to see the controller, let alone the drives attached to it, other than perhaps a note when the kernel initializes that it's doing passthrough. You need to set up a fair bit of command-line stuff for passthrough to work properly, unless something has changed in the past few months: kernel command-line options in grub.cfg and so on. If Proxmox is accessing the array while the VM is also doing so, that could well explain the issues you're having.
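As a rough sketch of the usual host-side setup (Intel example; AMD uses amd_iommu=on instead, and the exact steps are in the Proxmox PCI passthrough documentation):

Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# then run update-grub and reboot

# /etc/modules -- load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd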

For a Linux host serving files to Windows boxes, Samba is pretty much it. There are some NFS clients for Windows, but they don't seem well maintained.

I'm not aware of a pre-made LXC for samba. You might check the turnkey linux website, they have a number of pre-made images available. Or just install whatever distro you like, then use the package manager to install Webmin and configure things that way.
 

RedneckBob

New Member
Dec 5, 2016
@Free_Norway you may think this is a crazy idea, but why not try simply using Proxmox for storage? That works very well.
I've done just that. After having zfs send/receive issues with FreeNAS 10, I backed out, installed Proxmox, and set up ZFS as my destination. Currently using pve-zsync (thanks for the tutorials) and thinking about moving to znapzend.
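For anyone curious, a pve-zsync job looks roughly like this (VM ID, host, and dataset are placeholders, and the flag names are from memory, so double-check them against pve-zsync help):

Code:
# sync VM 100 to a ZFS dataset on another host, keeping 7 snapshots
pve-zsync create --source 100 --dest 192.168.1.2:tank/backup --name daily --maxsnap 7
pve-zsync list    # show the configured jobs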

Licenses are cheap enough for Proxmox and I sleep better knowing all my ZFS systems are on the same level and patched.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Circling back to this: I've got Proxmox running on my tiny, low-powered backup storage server... a ZFS mirror for the 2x 120GB SSDs, plus a ZFS triple mirror for my backup pool. I was originally going to manage it in a VM or container, but now I'm thinking I want to keep it as 'simple' as possible, so I'm trying to do the shares on Proxmox...

I've found a handful of Proxmox/Debian guides for NFS shares directly on the host... I'm using Proxmox 5 beta (latest with updates), and before I follow any 4.x guides -- is there a better/different way to manage/set up NFS shares "in" the Proxmox GUI, or should I just follow the command-line 4.x instructions and be off and going?
 

Free_Norway

New Member
Feb 9, 2017
A little update after some months; haven't had time lately.
  • Proxmox box up and running with an 8-disk RaidZ2 ZFS pool -> started with 5.0 Beta 1, now on 5.0
  • migrated all of my data (media, music, pictures...) to the ZFS pool
  • sharing all the data via Samba and zfs set sharesmb=on xxx -> working really well, impressive transfer speed
  • tried turnkey fileserver/mediaserver but didn't get them running -> all the config possibilities are really confusing
  • one Win10 VM running all of my Windows stuff -> without PCIe pass-through for now (maybe later)
All is running very well. I miss some features in Proxmox for administering ZFS, but maybe that's something that comes with future releases.
Trying to learn more about how to do cron scrubs/backups and stuff like that.
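For the scrub part, a cron entry is usually all it takes; a minimal sketch (the pool name "tank" is a placeholder, and note that Debian's zfsutils package may already ship a monthly scrub job in /etc/cron.d):

Code:
# /etc/cron.d/zfs-scrub -- scrub the pool every Sunday at 02:00
0 2 * * 0  root  /sbin/zpool scrub tank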

Stay tuned