What would happen if you put one 2 TB and one 1 TB partition on each of the 3 TB drives and made a 6-disk + 2-partition raidz2 and a 2-disk + 2-partition RAID1 or raidz?
OK
Thanks for the quick answer.
When I check zpool status, the raidz2 I'm experimenting with shows a DEGRADED state.
What kind of message/alarm would you get from pulling a SATA cable?
You should get an email. As ashamed as I am to admit it, that's the only thing I'm using to monitor ZFS health right now, besides checking zpool status over SSH every now and then.

Sorry, I wanted to ask whether the message/alarm should show up in Proxmox?
mp0: /mnt/media,mp=/export/media
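A line like the one above lives in the container's config under /etc/pve/lxc/. It can also be set from the Proxmox host with pct; a minimal sketch (the container ID 101 is a placeholder, not from the thread):

```shell
# Bind-mount the host directory /mnt/media into container 101,
# where it appears at /export/media. This writes the
# "mp0: /mnt/media,mp=/export/media" line into /etc/pve/lxc/101.conf.
pct set 101 -mp0 /mnt/media,mp=/export/media
```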
I'm just using Ubuntu containers, since FreeNAS is too heavy for my needs. For NFS, I just install nfs-kernel-server in the container and use the exports file to manage shares. For sharing to Windows at home, I just use Samba and make some shares with everything wide open. At work, I'm still using an Ubuntu container with Samba, but someone else then uses Windows to manage the share permissions.

@sno.cn I'm liking that container method! Going to try it and play around with it... it would be nice to have a container ready to go with all the sharing config set up, so you can image it and reload it on the fly. Do you use any other management layer in the container for sharing out to Windows, etc.? Could you use something like FreeNAS or Napp-IT within the container just to manage the share(s)?
I'm getting away from my ESXi + Napp-IT all-in-one and will need to manage shares differently going forward; this Proxmox idea is interesting and may save time/resources.
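The container-side setup described above (nfs-kernel-server plus a wide-open Samba share) can be sketched roughly as follows; the share name, paths, and network range are placeholders, not values from the thread:

```shell
# Inside the Ubuntu container: install the NFS and Samba servers.
apt install -y nfs-kernel-server samba

# NFS: export a bind-mounted path via /etc/exports, then reload exports.
echo '/export/media 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# Samba: a share with everything wide open, as described in the post.
cat >> /etc/samba/smb.conf <<'EOF'
[media]
   path = /export/media
   read only = no
   guest ok = yes
EOF
systemctl restart smbd
```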
These are valid points. I'm not using any nested filesystems, only top level, and each, in most cases, is mounted to a separate container since they're for completely different sharing purposes. And then nothing has access to the root pool, so I'd never need to mount it anyway.

I was going to do the container for sharing. What stopped me was a couple of things combined:
1) Bind mounts do not allow you to traverse filesystems. So if you have a pool mounted at /raid and a bunch of ZFS filesystems under that (/raid/fs1, /raid/fs2, etc.), you can't bind /raid and access fs1, fs2, and so on. Annoying. I understand the reasons for it, but it's still annoying.
2) There was a limit of 10 bind mounts (0-9) when I looked into it. That could easily have changed by now. Combined with 1), that made my sharing setup impossible to manage, as I have a lot of filesystems. I like it that way so I can manage the frequency and detail of backups and snapshots, as well as enabling compression for some filesystems but not others.
If they have fixed 2) so you can have loads of bind mounts, a simple shell script can generate a big list of mount lines.
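Such a script might look like the sketch below; the dataset names and the /export target are placeholders, not from the thread. In real use you would feed it the output of zfs list instead of a hard-coded list:

```shell
#!/bin/sh
# Emit one Proxmox mpN bind-mount line per ZFS filesystem.
# The filesystem list is hard-coded here as an illustration;
# real use would substitute `zfs list -H -o name`.
i=0
for fs in raid/fs1 raid/fs2 raid/fs3; do
    printf 'mp%d: /%s,mp=/export/%s\n' "$i" "$fs" "$(basename "$fs")"
    i=$((i + 1))
done
```

The generated lines could then be appended to the container's config file under /etc/pve/lxc/.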
For now, I just back up smb.conf and /etc/exports.
I could have created network shares in the host, only for the sharing container, but then I have 2 sets of configuration to manage.
Perhaps the Turnkey File Server is what you are looking for?
- Is there any LXC container with a graphical interface, or do you have to do all of the sharing setup by command line?
@Free_Norway you may think this is a crazy idea, but why not try simply using Proxmox for storage? That works very well.

I've done just that. After having zfs send/receive issues with FreeNAS 10, I backed out, installed Proxmox, and set up ZFS as my destination. Currently using pve-zsync (thanks for the tutorials) and thinking about moving to znapzend.