1. G

    Snaps are broken after restarting Ubuntu 20.04 on ZFS

    Hi, I’ve been using snap for over a year now without any problems and it was great. Recently I decided to reinstall my system with Ubuntu 20.04, but with ZFS as the file system. Since then I have had weird problems with snap: after a restart I cannot launch any of my applications installed via...
  2. D

    Optimize Write Amplification (ZoL, Proxmox)

    Hi, I've set up a Proxmox hypervisor and got a total write amplification of around 18 from VM to NAND flash. Can someone give me a hint on how to improve this? Server setup: Supermicro X10SRM-F, Xeon E5-2620 v4, 64GB DDR4-2133 ECC; root drives: 2x Intel S3700 100GB (sda/sdb, LBA=4k) mirrored with mdraid...
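    On zvol-backed VMs, a mismatch between the guest's block size, the zvol's volblocksize, and the pool's ashift is a common source of write amplification. A sketch of the properties worth checking first (the pool and zvol names below are placeholders, not from the thread):

    ```shell
    # Placeholder names; substitute your own pool/zvol.
    zpool get ashift rpool                        # should match the SSD's physical sector size
    zfs get volblocksize rpool/data/vm-100-disk-0 # fixed at creation time; cannot be changed later
    zfs get compression,used,logicalused rpool/data/vm-100-disk-0
    ```

    Note that volblocksize is set when the zvol is created; changing it means recreating the zvol and copying the data over.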
  3. J

    Quickest method to clone a ZFS snapshot to a new zvol?

    I have a ZoL system (0.7.9-1) with 3 pools: VMdata, the main pool with zvols for VMs (Proxmox), where daily snapshots run; Backup, the backup pool where snapshots get replicated (Sanoid); and Pool3, another, empty pool. There are a couple of zvols that I want to access in their entirety from a few days back...
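    For reference, the two usual approaches trade speed against independence: `zfs clone` is instantaneous but must live in the same pool as the snapshot and stays dependent on it, while `zfs send | zfs recv` makes a fully independent copy and can cross pools. A sketch with placeholder zvol/snapshot names:

    ```shell
    # Instant and space-efficient, but the clone depends on its origin
    # snapshot and must stay in the same pool (Backup here):
    zfs clone Backup/vm-101-disk-0@daily-2018-10-01 Backup/vm-101-restore

    # Independent full copy, can target another pool (slower, uses real space):
    zfs send Backup/vm-101-disk-0@daily-2018-10-01 | zfs recv VMdata/vm-101-restore
    ```

    If the zvol is only needed temporarily for a few days, the clone is usually the quicker choice; it can be destroyed afterwards without touching the snapshot chain.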
  4. sboesch

    TrueNAS Beta 2 is working great!

    I installed Beta 2 on one of my servers; it has dual E5-2620 v3 CPUs and 128GiB of RAM. I have 8x 4TB rust disks in a pool, and two 2TB SSDs in a pool. I get identical speeds and performance to what I had with ZoL on Debian Buster. My previous experience with TrueNAS Beta 1 was not so hot, I...
  5. J

    Why mount zfs dataset?

    The Oracle docs illustrate mounting a ZFS dataset under /export/dataset.... I might be missing something, but why mount it? If you're doing an NFS export, why not just export /pool/dataset?
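    On the mounting question: NFS exports paths out of the mounted filesystem namespace, so the dataset has to be mounted somewhere before its contents can be exported at all; ZFS can handle both the mount and the share itself via properties. A sketch with a placeholder dataset name:

    ```shell
    # Placeholder dataset; ZFS mounts it at the given mountpoint and
    # shares it over NFS in one step via the sharenfs property:
    zfs set mountpoint=/export/data tank/data
    zfs set sharenfs=on tank/data        # or e.g. sharenfs="rw=@192.168.1.0/24"
    zfs get mountpoint,sharenfs tank/data
    ```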
  6. gb00s

    iXsystems next Open Source Project - TrueNAS SCALE >> Debian Linux

    Link: Starting our next Open Source Project - TrueNAS SCALE
  7. vl1969

    [solved] ZFS on Linux bootable rpool setup (Proxmox 5.3)

    Hi everyone, haven't been here for a while. I have a strange question, maybe? I have a Proxmox 5.4 server set up on ZFS. The setup is as follows: the system is on 2 SSDs in a mirrored pool, and around 16 HDDs of various sizes are in mirrored pools based on disk size. For example, I have a pool with 2T disks. A...
  8. N

    Drives Missing When HBA is Moved To Another PCIe Slot

    Hi, I have several drives connected to an ASRock X470D4U motherboard (3 PCIe slots) using an HP H220 HBA (MPT2BIOS-, IT mode) and a SuperMicro BPN-SAS-216A backplane. The drives are configured into a ZFS array in Ubuntu 19.10, and `lsblk` and `zpool status` both show all the drives as...
  9. W

    Disappointing ZFS read performance on 2 x 6 RaidZ2 and quest for bottleneck(s)

    Hi! After spending a lot of time reading valuable sources such as STH, I spent the last 3 months slowly incubating my new file server. I am unfortunately quite disappointed by the performance of the resulting system. I apologize for the long post, but I will try to give all relevant info that...
  10. 3

    Off site 2 way sync, dissimilar OS

    Hi guys, I'm trying to identify a good method of conducting a 2-way sync between a ZFS (OmniOS) array and a remote hardware array running Windows. Both have NFS and Samba shares. The arrays are around 20TB in size with 16TB utilized. Please don't tell me to switch the hardware Windows machine...
  11. Z

    Supermicro SAS backplane, LSI 2208, and working with ZFS

    Hello and Happy New Year to everyone, I'm going to get my hands on Supermicro hardware to build a virtualization server, and I have a question regarding using it properly with ZFS. So first the hardware: motherboard: Supermicro X9DRH-7TF; it...
  12. D

    Check efficiency of lz4 compression in ZFS

    Hi, is it possible to see the real space usage of a file in ZFS when e.g. lz4 compression is enabled? When I do an ls -l of a folder, it seems to show the uncompressed size (or compression is completely ineffective for those files). Has anyone already dealt with that kind of stuff...
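    For what it's worth, `ls -l` reports the logical (uncompressed) file size by design; the space actually allocated after compression shows up in the block count and in the dataset's properties. A sketch, assuming a dataset named tank/data and a file in it:

    ```shell
    ls -ls bigfile      # first column: allocated blocks on disk (post-compression)
    du -h bigfile       # allocated size on disk, not the logical size
    # Per-dataset view; 'logicalused' is the uncompressed total:
    zfs get compressratio,used,logicalused tank/data
    ```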
  13. D

    rsync or cp creates trivial ACLs on the destination

    Is there a way to suppress this? As far as I can see, these ACEs (almost full rights for owner@, read_attr_set for group@ and everyone@) will not really interfere with anything, but with a bit of OCD they look horrible in my neatly designed group-based ACLs. Copying data onto the folders via...
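    On illumos-based systems this behaviour is usually governed by the aclmode and aclinherit dataset properties: a chmod-style permission set, as issued by cp or rsync when they replay file modes, rewrites the ACL according to aclmode. A hedged sketch with a placeholder dataset name; whether it fully suppresses the trivial ACEs depends on the exact tool behaviour:

    ```shell
    # 'passthrough' leaves existing ACEs alone on chmod instead of
    # rewriting them into trivial owner@/group@/everyone@ entries:
    zfs set aclmode=passthrough tank/share
    zfs set aclinherit=passthrough tank/share

    # rsync without -p/-A, so it does not explicitly replay permissions
    # and the inherited ACL from the parent folder applies:
    rsync -rtv --no-perms source/ /tank/share/dest/
    ```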
  14. Q

    R820 / H310 "Adapter at Baseport is not responding"

    I posted this on Dell's community as well but no responses as yet. So I think I have a serious problem that I need any help I can get. I don't know if the H310, the SAS cables, the enclosure, or the drives are bad. I will explain. I purchased a re-purposed R820 with an H310 and 10 1TB...
  15. D

    Checking complete disks for errors in OmniOS

    Hi, I just upgraded my server rack with a few more drives. As 12TB per disk is a lot of space, I wanted to check them thoroughly for errors before actually using them. On Windows I know of several tools for that, but I have no experience with Solaris/OmniOS. What do you usually do for...
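    A common approach on OmniOS is the analyze submenu of format(1M) for a surface scan, or simply reading the whole raw device with dd to exercise every sector. Device names below are placeholders:

    ```shell
    # Full sequential read of the raw disk (placeholder device name);
    # check iostat -En afterwards for newly incremented error counters:
    dd if=/dev/rdsk/c1t0d0p0 of=/dev/null bs=1024k
    iostat -En

    # Interactive surface analysis:
    format      # select the disk -> analyze -> read
    ```

    A read pass is non-destructive; format's write/compare analysis is more thorough but destroys any data on the disk.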
  16. A

    SOLVED: duplicated zpool via send/recv, disk usage appears to be incorrect

    Solution in post two. TL;DR: it was the compression (which is, of course, almost perfect on a file filled with zeros). I'm trying to find a way to create duplicate zvols (e.g., from a gold VM) without using cloning, which would cause a dependency issue (I'd rather be able to delete the parent...
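    As the solution in post two suggests, the apparent mismatch between source and duplicate usually disappears once compression is taken into account, since a zero-filled file compresses almost perfectly; `logicalused` shows the uncompressed figure. A sketch with placeholder dataset names:

    ```shell
    # Compare allocated vs. logical sizes and the compression ratio
    # on both the original and the duplicated dataset (names assumed):
    zfs list -o name,used,logicalused,refer,compressratio tank/gold tank/copy
    ```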
  17. L

    Cache plans for ZFS server

    So I have taken the plunge and am building a Linux lab server around an E-2100 Xeon. I have six 8TB SATA drives that I plan to put into a ZFS RAIDZ2 pool. I am considering adding some L2ARC and ZIL/SLOG cache to increase IOPS, and I have a 240GB M.2 SSD (Corsair MP510) for that. To complete the...
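    A single M.2 SSD can be partitioned between both roles, though a SLOG only accelerates synchronous writes (and ideally wants power-loss protection and a mirror), while L2ARC mainly helps once ARC in RAM is exhausted. A sketch with placeholder pool and partition names:

    ```shell
    # Small partition (a few GB is plenty) as SLOG, the rest as L2ARC;
    # pool and device paths here are assumptions:
    zpool add tank log   /dev/disk/by-id/nvme-Corsair_MP510-part1
    zpool add tank cache /dev/disk/by-id/nvme-Corsair_MP510-part2
    zpool status tank
    ```

    Both vdev types can be removed again with `zpool remove`, so this is easy to experiment with.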
  18. optimans

    Cockpit ZFS Manager [Now Available]

    Cockpit ZFS Manager: an interactive ZFS on Linux admin package for Cockpit. WARNING! Cockpit ZFS Manager is currently pre-release software. Use at your own risk! Requirements: Cockpit 201+, NFS (optional), Samba 4+ (optional), ZFS 0.8+. Now Available (09-08-2020)...
  19. D

    NVMe performance on IBM x3650 M3

    Hi, on my IBM x3650 M3 server (model 7945K3G) I added two Kingston A1000 M.2 SSDs with 960 GB of capacity (the exact model is SA1000M8/960G A1000), mounted in two M.2 PCIe adapters (https://www.amazon.it/gp/product/B07CBJ6RH7/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1). I'm using Proxmox...
  20. A

    Sanity Check

    Retooling my backup solution to have more space and failover potential. I have two 1U servers left over from an older project, both identical, with four 3.5" bays. I want to have a somewhat large network backup target, mainly for accessible archived storage for potential later use. I have a few...