Several questions on optimizing a large Proxmox ZFS system
VMs
1. Proxmox host
2. Ubuntu/Debian MySQL VM - DB size about 1 TB
3. General-purpose Ubuntu
No media serving or streaming requirements, just a metric crap-ton of small DICOM/JPEG2000 files.
Hardware
SM X10DRi / 64 GB RAM / 2x E5 v3 CPUs - SM836 case
2x 64 GB SATA DOM (Proxmox boot drives)
3x PCIe Samsung PM953 NVMe M.2 drives (on separate carrier cards)
16x 10 TB HGST drives (8x 4Kn, 8x 512e)
Dataset size is about 80 TB
I used various calculators and ZFS guides and arrived at 2 RAID-Z1 vdevs of 8 disks each,
which should give me about 120 TB usable. I definitely lose some reliability, but I don't see
a better way to retain capacity - RAID-Z2 nets me only about 100 TB usable.
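A quick back-of-the-envelope check of those two layouts (a sketch only; it ignores ZFS metadata and slop-space overhead of roughly 2-3%, so real usable space lands a bit lower than these figures):

```python
# Rough usable capacity for 2 vdevs of 8x 10 TB drives, reported in TiB.
# Ignores ZFS metadata/slop overhead (~2-3%), so real numbers come in lower.
DRIVE_TB = 10                 # "marketing" terabytes per drive
TB_TO_TIB = 1e12 / 2**40      # ~0.9095 TiB per marketing TB

def usable_tib(vdevs: int, disks_per_vdev: int, parity: int) -> float:
    data_disks = vdevs * (disks_per_vdev - parity)
    return data_disks * DRIVE_TB * TB_TO_TIB

print(f"2x RAID-Z1: {usable_tib(2, 8, 1):.0f} TiB")  # ~127 TiB of data disks
print(f"2x RAID-Z2: {usable_tib(2, 8, 2):.0f} TiB")  # ~109 TiB of data disks
```

That lines up with the ~120 TB / ~100 TB figures once overhead is subtracted.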
1. - Having 8 512e + 8 4Kn drives gives me some pause. Should I put all the 4Kn disks in one vdev and all the 512e disks in the other, or 4 of each in each vdev?
2. - I can't fit or afford any more drives. Can I achieve better redundancy without sacrificing storage in a different multi-vdev config?
3. - I intended to use 2 PM953s (battery-backed) mirrored as a SLOG and 1 PM953 as an L2ARC (I haven't tried it yet). Various guides mention that large L2ARCs may impact performance. Is a mirrored pair of NVMe drives even doable as a SLOG?
4. - With such large storage, I am thinking of taking the RAM up to 128 or 192 GB, since various sources say ZFS has meaningful RAM overhead. I have seen contradictory comments here at STH, but I feel that 64 GB may come up short.
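On question 1: if you force ashift=12 at pool creation, every vdev is 4 KiB-aligned and the 512e/4Kn mix stops mattering for writes, so either split works. A hypothetical sketch (pool name and device names are placeholders, not your actual by-id paths):

```shell
# Sketch only: placeholder device names, pool name "tank" assumed.
# ashift=12 forces 4 KiB alignment, so 512e and 4Kn drives behave the same.
zpool create -o ashift=12 tank \
    raidz1 sdb sdc sdd sde sdf sdg sdh sdi \
    raidz1 sdj sdk sdl sdm sdn sdo sdp sdq

zpool get ashift tank    # verify it reports 12
```

In practice you'd use /dev/disk/by-id names rather than sdX so the pool survives device reordering.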
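On question 3: yes, a mirrored log vdev is a normal, supported configuration (the usual term is SLOG; ZFS calls the vdev type "log"). Cache (L2ARC) devices cannot be mirrored, and L2ARC headers consume ARC RAM, so a single device there is the right call. A sketch, again with placeholder names:

```shell
# Sketch: placeholder NVMe device names, pool name "tank" assumed.
# Mirrored SLOG -- log vdevs may be mirrored:
zpool add tank log mirror nvme0n1 nvme1n1
# Single L2ARC device -- cache vdevs cannot be mirrored:
zpool add tank cache nvme2n1

zpool status tank    # shows the log mirror and the cache device
```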
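On question 4: on a virtualization host it's common to cap the ARC explicitly so ZFS and the VMs don't fight over memory. A sketch for a 128 GB upgrade, capping ARC at 96 GiB (the value is in bytes; pick your own split):

```shell
# Sketch: cap ARC at 96 GiB (96 * 2^30 = 103079215104 bytes).
echo "options zfs zfs_arc_max=103079215104" > /etc/modprobe.d/zfs.conf
# Apply immediately without a reboot:
echo 103079215104 > /sys/module/zfs/parameters/zfs_arc_max
```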
Thanks
11Blade