Proxmox / ZFS Storage Configuration

Discussion in 'Linux Admins, Storage and Virtualization' started by 11Blade, Jan 27, 2018.

  1. 11Blade

    11Blade Member

    Several questions on optimizing a large Proxmox ZFS system

    VMs
    1. Proxmox host
    2. Ubuntu/Debian MySQL VM - DB size about 1TB
    3. General-purpose Ubuntu

    No media serving or streaming requirements, just a metric crap-ton of small DICOM/JPEG 2000 files.

    Hardware
    SM X10DRi / 64GB RAM / 2x E5 v3 CPUs - SM836 case
    64GB SATADOM x 2 (Proxmox boot drives)
    3 x PCIe Samsung PM953 NVMe M.2 drives (on separate cards)
    16 x 10TB HGST drives (8x 4Kn drives, 8x 512e)

    Dataset size is about 80TB

    I used various calculators and ZFS guides and arrived at 2 RAID-Z1 vdevs of 8 disks each,
    which will give me about 120TB usable. I definitely lose on reliability, but I don't see a better
    way to retain capacity; RAID-Z2 nets me 100TB usable.
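    As a rough sanity check on those figures, the raw-vs-usable arithmetic can be sketched as below (a back-of-the-envelope estimate that ignores ZFS metadata, slop space, and padding, so real usable space is somewhat lower; the quoted 120TB/100TB figures are in the right ballpark once TB-vs-TiB is accounted for):

```python
# Rough usable-capacity estimate for 16 x 10 TB drives in two 8-disk vdevs.
# Ignores ZFS metadata, slop space, and padding, so real figures come in lower.

DRIVE_TB = 10                 # marketing terabytes (10^12 bytes)
TB_TO_TIB = 1e12 / 2**40      # ~0.909, converts TB to TiB

def usable_tb(vdevs, disks_per_vdev, parity):
    """Usable capacity in TB for a pool of striped RAID-Z vdevs."""
    data_disks = vdevs * (disks_per_vdev - parity)
    return data_disks * DRIVE_TB

raidz1 = usable_tb(2, 8, 1)   # 2 x RAID-Z1 of 8 disks each
raidz2 = usable_tb(2, 8, 2)   # 2 x RAID-Z2 of 8 disks each

print(f"2x RAID-Z1: {raidz1:.0f} TB (~{raidz1 * TB_TO_TIB:.0f} TiB)")
print(f"2x RAID-Z2: {raidz2:.0f} TB (~{raidz2 * TB_TO_TIB:.0f} TiB)")
```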

    1. - Having 8 512e + 8 4Kn drives gives me some pause. Should I put all the 4Kn disks in one vdev and all the 512e disks in the other, or 4 of each in each vdev?

    2. - I can't fit or afford any more drives. Can I achieve any better redundancy without sacrificing storage in a different multi-vdev config?

    3. - I intended to use 2 PM953s (power-loss protected) mirrored as a SLOG and 1 PM953 as an L2ARC. (I haven't even tried it yet.) Various guides mention that a large L2ARC may hurt performance. Is a mirrored pair of NVMe drives even doable as a SLOG?
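    For what it's worth, a mirrored SLOG plus a separate cache device is straightforward syntax-wise; something along these lines (pool and device names are hypothetical, try it against a scratch pool first):

```shell
# Add a mirrored SLOG (two NVMe drives) to pool "tank" -- hypothetical device names
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# Add the third NVMe as L2ARC; cache devices are never mirrored,
# since losing one only costs you cached copies of on-pool data
zpool add tank cache /dev/nvme2n1
```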

    4. - With such large storage, I am thinking of taking the RAM up to 128 or 192GB, since various sources say there is some RAM overhead for ZFS. I have seen contradictory comments here at STH, but I feel that 64GB may come up short.
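    On the RAM question, one detail worth knowing: on Linux/Proxmox the ARC will by default grow to roughly half of RAM, and it can be capped with a module option if the VMs need the memory more. A sketch, with 96 GiB as a purely illustrative value:

```shell
# /etc/modprobe.d/zfs.conf -- cap ARC at 96 GiB (example value, adjust to taste)
# 96 * 2^30 = 103079215104 bytes
options zfs zfs_arc_max=103079215104

# then rebuild the initramfs and reboot for the change to take effect:
#   update-initramfs -u
```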

    Thanks

    11Blade
     
    #1
  2. pricklypunter

    pricklypunter Well-Known Member

    General consensus is: don't mix your disks. While the 512e drives should not suffer any performance hit, the 4Kn ones might, and having both in the same pool will probably degrade the whole pool a smidge. I would not even consider RAID-Z1 for something this size; unless I had mirrored hardware, a minimum of RAID-Z2 and a good backup strategy would be my choice here. Max out your RAM first and run the numbers once you have a baseline to work from. If you need to add a SLOG and L2ARC at that point, you will have a better idea of any performance/cost benefit. Any SLOG that you add must be super low latency and have end-to-end PLP :)
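    A two-vdev RAID-Z2 layout as suggested above would look something like this at creation time (disk names are placeholders for the real /dev/disk/by-id paths; ashift=12 forces 4K alignment, which is safe for both the 512e and 4Kn drives):

```shell
# One pool, two 8-disk RAID-Z2 vdevs, 4K alignment -- hypothetical disk names
zpool create -o ashift=12 tank \
    raidz2 disk1 disk2  disk3  disk4  disk5  disk6  disk7  disk8 \
    raidz2 disk9 disk10 disk11 disk12 disk13 disk14 disk15 disk16
```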
     
    #2
  3. EluRex

    EluRex Active Member

    My suggestion is as follows:
    1. Use the 8 512e HDDs to create one pool of striped RAID-Z1 vdevs (RAID50-style), and the 8 4Kn HDDs to create another such pool for the database only. Make sure the 4Kn pool is created with ashift=12 (4K blocks).
    2. The above configuration allows a maximum of 4 drive failures out of your 16 drives (at most one per vdev).
    3. The PM953s are not fast enough and may actually drag the system down. Your 4Kn DB pool could benefit from a much faster NVMe SSD like the SM961. For media file serving, a SLOG does not improve performance significantly.
    4. The more RAM the merrier for ZFS. Please note that ZFS will release ARC memory if the system requests more.
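    Sketched as commands, suggestion 1 above would be roughly the following (device names are hypothetical; the recordsize line is an extra step commonly taken for MySQL/InnoDB, assuming InnoDB's default 16K page size):

```shell
# Pool 1: 8 x 512e drives as two striped 4-disk RAID-Z1 vdevs (RAID50-style)
zpool create -o ashift=12 media \
    raidz1 s1 s2 s3 s4 \
    raidz1 s5 s6 s7 s8

# Pool 2: 8 x 4Kn drives, same layout, for the database only
zpool create -o ashift=12 db \
    raidz1 n1 n2 n3 n4 \
    raidz1 n5 n6 n7 n8

# Match the record size to InnoDB's 16K pages (assumption: default page size)
zfs set recordsize=16k db
```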
    Happy ZFSing and PVEing.
     
    #3