
Proxmox VE Build Questions

Discussion in 'Linux Admins, Storage and Virtualization' started by Free_Norway, Feb 9, 2017.

  1. Free_Norway

    Free_Norway New Member

    Joined:
    Feb 9, 2017
    Messages:
    5
    Likes Received:
    1
    Hi,
    I'm new to the forum and hope to find some answers/help here.

    For years I have used various Windows versions with a HW RAID controller as my home server/NAS.

    I have now played a little bit with Proxmox and I really like it, having tried most Linux/FreeBSD NAS OSes.
    The last setup I played with was:
    • Proxmox on a single new 850 Pro SSD, ZFS
    • Windows guest with GPU PCIe and USB passthrough, used as media player/server (high-res Blu-ray rips/4K...)
    What I have got:
    • 8x 4TB SATA disks for storage; 5 of them are currently in use in my Windows HW RAID 6 setup
    • 1x 850Pro and 1x 850Evo SSD 256GB
    • 1x 2TB + 1x 1.5TB + 1x 1TB older SATA disks
    and what I want to achieve:
    • Proxmox on a mirrored SSD zpool
    • 8x 4TB drives in ZFS RAIDZ1/Z2 (first migrating the data off the HW RAID to another server)
    • 2-5 VMs for Windows/NAS OS/pfSense/other OS experiments
    My biggest problem is the lack of experience with CLI administration of ZFS.
    I would like an easy-to-use ZFS GUI tool (coming from Windows).
    I was thinking about Webmin on top of Proxmox, but most threads I have found argue that it's not the best solution.

    Any ideas or advice?
     
    #1
    Last edited: Feb 9, 2017
    vhypervisor likes this.
  2. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    8,383
    Likes Received:
    2,264
    Hey @Free_Norway, mind if I move this to its own thread?
     
    #2
  3. Free_Norway

    Free_Norway New Member

    Joined:
    Feb 9, 2017
    Messages:
    5
    Likes Received:
    1
    Fine with me
     
    #3
  4. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    8,383
    Likes Received:
    2,264
    @Free_Norway we are still using a Proxmox cluster for web hosting. It works very well. We are using both ZFS mirrors and a small Ceph cluster.

    Here is a guide to the Proxmox ZFS CLI side. It is basically standard ZFS commands, so it is very simple to use (and to look something up if there is an issue):
    Add a mirrored zpool to Proxmox VE
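
    For reference, the CLI side that guide walks through boils down to something like this (a rough sketch; the pool name tank, the storage ID tank-vm, and the by-id device paths are placeholders for your own setup):
    Code:
    # create a mirrored pool from two whole disks (ashift=12 for 4K-sector drives)
    zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
    # confirm the mirror is healthy
    zpool status tank
    # register the pool as a VM storage target in Proxmox VE (can also be done in the web GUI)
    pvesm add zfspool tank-vm --pool tank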

    Getting a bit fancier, you can setup a ZFS sync to an offsite location using this: Automating Proxmox ZFS backups with pve-zsync
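
    If you go that route, the job setup looks roughly like this (a sketch only; VM ID 100, the target host 192.168.1.10, and the pool name backuppool are made-up examples):
    Code:
    # create a recurring sync job for VM 100 to a ZFS pool on a remote host, keeping 7 snapshots
    pve-zsync create --source 100 --dest 192.168.1.10:backuppool --name nightly --maxsnap 7 --verbose
    # list configured sync jobs and their state
    pve-zsync list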

    If you look up the commands you need, it should be a 5-minute process at most to get it working with your Proxmox host. Once the pool is attached to Proxmox, you will mostly work through the web interface anyway.
     
    #4
    niekbergboer and T_Minus like this.
  5. Free_Norway

    Free_Norway New Member

    Joined:
    Feb 9, 2017
    Messages:
    5
    Likes Received:
    1
    Hi Patrick

    Thanks for the reply.

    I used that guide when I played around with Proxmox and a ZFS RAIDZ pool with 4 drives. I got it working without any hiccups, but I am not sure I would know what to do if something went wrong.
    Does Proxmox show status information about the ZFS pool?
     
    #5
  6. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    8,383
    Likes Received:
    2,264
    Proxmox can send you alerts when something goes wrong. For example, when I had a Ceph node overheat and fall out of the array, I got this beauty of an e-mail:
    Code:
    /etc/cron.daily/logrotate:
    cat: /var/run/ceph/osd.6.pid: No such file or directory
    cat: /var/run/ceph/osd.8.pid: No such file or directory
    From there, I was able to find out what the issue was.

    My advice: set up a test VM and practice failing virtual disks. My sense is that within an hour or two you will feel comfortable troubleshooting ZFS. Proxmox ZoL commands are very easy to research via Google.
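
    The commands you would be exercising in that practice run are roughly these (a sketch; tank and the device paths are placeholders):
    Code:
    # check pool health and see which device is degraded
    zpool status -v tank
    # simulate losing a disk
    zpool offline tank /dev/disk/by-id/ata-OLDDISK
    # swap in a replacement and let ZFS resilver
    zpool replace tank /dev/disk/by-id/ata-OLDDISK /dev/disk/by-id/ata-NEWDISK
    # watch the resilver progress
    zpool status tank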
     
    #6
    Last edited: Feb 9, 2017
  7. Free_Norway

    Free_Norway New Member

    Joined:
    Feb 9, 2017
    Messages:
    5
    Likes Received:
    1
    Thanks for all the feedback
     
    #7
  8. sno.cn

    sno.cn Member

    Joined:
    Sep 23, 2016
    Messages:
    69
    Likes Received:
    19
    Managing ZFS with the CLI is super easy, and it does exactly what you tell it to the first time, which may or may not be the case with a graphical utility. In the past, whenever I've used a GUI to manage ZFS, I've gone in afterwards with the CLI anyway to make sure my pool was configured correctly.

    Super easy to get ZFS going:
    • Install and configure Proxmox. If you want to install Proxmox on ZFS, just use the built-in wizard.
    • Follow this guide to add a ZFS pool.
    • That guide is for a mirror. If you want some other configuration, it'll be super easy to find with Google.
    • Set compression on your new pool with 'zfs set compression=lz4 myZfsPool' (compression is a dataset property, so it is set with zfs rather than zpool; these steps are sketched together just after this list).
    • Create a filesystem on your new pool if you want to. Mine are like 'zfs create myZfsPool/iso' or 'zfs create myZfsPool/media' or 'zfs create myZfsPool/backup' or whatever.
    • Change the mountpoint for your ZFS filesystem. Most of the guides tell you to mount to /mnt and then bind that to /export, but I just mount to /export. You can mount it anywhere you like, it doesn't really matter. So if you made the filesystem iso in your ZFS pool, you could use 'zfs set mountpoint=/export/iso myZfsPool/iso' to mount your iso filesystem under the /export directory.
    • Follow the guide I linked above if you need Proxmox to use your zfs pool. Or just read the Proxmox wiki.
    • Make an LXC container, bind your mounted ZFS filesystem to it, and then use the container to share it on your network. This way you can isolate permissions, and easily reset if you break something.
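
    Here is what those steps look like strung together (just a sketch; myZfsPool, the dataset names, and container ID 101 are example names):
    Code:
    # enable compression on the pool's root dataset (children inherit it)
    zfs set compression=lz4 myZfsPool
    # create a filesystem and mount it under /export
    zfs create myZfsPool/media
    zfs set mountpoint=/export/media myZfsPool/media
    # bind-mount the filesystem into an existing LXC container (ID 101) at /mnt/media
    pct set 101 -mp0 /export/media,mp=/mnt/media
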
    Just get in there and mess around. Follow the guides, mess around some more, learn, and then do it correctly. You'll figure it out. With your 8x HDD setup, I would rather use RAID 10 (striped across 4 mirrors) instead of any RAIDZ configuration. There are arguments on either side of this; I don't run parity RAID on anything, at home or in production, but again that's just my preference.

    Depending on your use case, you may or may not need to add a SLOG and/or cache device. I almost never use either.
     
    #8
    T_Minus likes this.
  9. Free_Norway

    Free_Norway New Member

    Joined:
    Feb 9, 2017
    Messages:
    5
    Likes Received:
    1
    Thanks for the reply, sno.cn.

    Some questions:
    • What does the "bind to /export" do?
    • When googling ZFS topics, is it important that the info is about ZoL, or will FreeBSD/Oracle info be the same?
    • Why would you prefer an LXC container over e.g. KVM virtualization?
      I ask because I have never used LXC.
    • I know there is a lot of discussion about RAID 1/10 vs. RAIDZ, but my idea was to get the most space possible with some protection against drive failure. The things that will be stored there are replaceable (BD rips/CD rips/RAW pictures...); a rough capacity comparison is sketched after this list.
      Important data is stored on at least 1 or 2 other media/places.
      The server will be idle/under low demand for most of the day (when we are at work/sleeping), so the disks are not strained.
    • How important is the optimal number of disks for RAIDZ?
      I think I found the answer to this here:
      How I Learned to Stop Worrying and Love RAIDZ | Delphix
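
    For rough numbers with my 8x 4TB drives (raw capacity, before ZFS overhead and TB-to-TiB conversion):
    Code:
    # RAIDZ1:              (8 - 1) x 4TB = 28TB usable, survives 1 drive failure
    # RAIDZ2:              (8 - 2) x 4TB = 24TB usable, survives 2 drive failures
    # 4x mirrors (RAID10): (8 / 2) x 4TB = 16TB usable, survives 1 failure per mirror pair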

    Thanks for all the input
     
    #9
    Last edited: Feb 10, 2017
  10. ttabbal

    ttabbal Active Member

    Joined:
    Mar 10, 2016
    Messages:
    392
    Likes Received:
    96
    I think the bind mention refers to bind-mounting the ZFS filesystem into the container. It makes it look like a normal local filesystem inside the container, which is nice.

    LXC and other containers are more efficient as they don't need to virtualize the whole system. They share the kernel from the host, it's a bit like a really secure chroot, BSD Jail, or Solaris Zone. The other nice thing is that you don't have to push everything through the network stack, you can use tricks like the bind mounts instead. Some things don't work so well in containers, so KVM is nice to have for those. I use containers on my Proxmox host whenever possible just to keep overhead down. They also start up faster as they don't have to boot a kernel, probe hardware, etc..

    All the ZFS systems share commands. There are a few things that work less well on Linux than on Solaris, though; stuff like the share settings for creating network-accessible shares tends not to work as well. The core stuff, like creating pools/filesystems, scrubs, send/recv, and permissions, all works the same. My preference is one large pool, with filesystems for each type of data. Big stuff like ISO images, personal files, backup data, etc. all have their own filesystem. This lets me set things like compression to match the data type. There is no point in having compression enabled on movies, but it does help on documents. It also helps when doing things like send/recv: I can do it more frequently for documents, while the near-unchanging stuff can be done weekly or so.
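
    Per-type settings like that are just dataset properties, along these lines (tank and the dataset names are only examples):
    Code:
    # documents compress well, so keep lz4 on
    zfs create -o compression=lz4 tank/documents
    # already-compressed media gains nothing, so turn compression off there
    zfs create -o compression=off tank/movies
    # check what each filesystem inherits or overrides
    zfs get -r compression tank
    # incremental send/recv of just the documents filesystem to a backup pool
    # (assumes the @yesterday snapshot already exists on both sides)
    zfs snapshot tank/documents@today
    zfs send -i tank/documents@yesterday tank/documents@today | zfs recv backup/documents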

    Array type is all about tradeoffs. You use less space for redundancy with RAIDZ, but expansion is more involved and replacing drives is slower. Performance is also lower, but that might not matter for your needs. I prefer mirrors, as with 10Gb networking the performance difference is dramatic, particularly for random I/O, which is almost all the I/O on a server since there are other clients, background processes, VMs/containers, etc. I also like that I can expand the array by adding or replacing 2 drives, rather than however many I used for RAIDZ. And repairs are significantly faster: replacing a failing drive takes an hour or two, vs. a day or so for larger RAIDZ drives. But those are my reasons and needs; yours may well be different. Whatever type you use, back up the important data. :)
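
    The expansion I mean is just adding another mirror vdev to the striped pool, roughly like this (tank and the device paths are placeholders):
    Code:
    # grow a pool of striped mirrors two disks at a time by adding one more mirror vdev
    zpool add tank mirror /dev/disk/by-id/ata-NEWDISK1 /dev/disk/by-id/ata-NEWDISK2
    # verify the new vdev shows up
    zpool status tank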
     
    #10
    sno.cn and T_Minus like this.