Hi, I’ve been using snap for over a year now without any problems, and it was great.
Recently I decided to reinstall my system with Ubuntu 20.04, but with ZFS as the file system. Since then I've had weird problems with snap: after a restart I cannot launch any of my applications installed via...
I've set up a Proxmox hypervisor and am seeing a total write amplification of around 18 from VM to NAND flash. Can someone give me a hint on how to improve this?
Supermicro X10SRM-F, Xeon E5-2620v4, 64GB DDR4 2133 ECC
2x Intel S3700 100GB (sda/sdb, LBA=4k) mirrored with mdraid...
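One way to narrow down where the amplification comes from is to measure writes at each layer over the same interval. A rough sketch (device and pool names are assumptions, and SMART attribute names vary by drive model):

```shell
# Inside the VM: sectors written since boot (field 7 of the stat file, x512 bytes)
cat /sys/block/vda/stat

# On the host: write bandwidth the pool actually issues to its vdevs
zpool iostat -v rpool 60 2

# At the SSDs: host-write / NAND-write counters from SMART
smartctl -A /dev/sda | grep -Ei 'host_writes|nand|total_lbas'
```

Write amplification is then roughly NAND bytes written divided by guest bytes written over the same window; comparing adjacent layers shows whether the guest filesystem, the zvol layer (volblocksize vs. guest block size), or the SSD itself contributes most.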
I have a ZoL system (0.7.9-1) with 3 pools:
VMdata: Main pool with zvols for VMs (proxmox), daily snapshots running here
Backup: Backup pool where snapshots get replicated (Sanoid)
Pool3: Other empty pool
There are a couple of zvols that I want to access in their entirety, as they were a few days back...
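Since a zvol snapshot has no browsable `.zfs` directory, one way to read it is to clone it into a temporary block device. A sketch with made-up dataset and snapshot names:

```shell
# List the available snapshots for the zvol
zfs list -t snapshot -r VMdata/vm-100-disk-0

# Clone the snapshot from a few days back into a new, writable zvol
zfs clone VMdata/vm-100-disk-0@autosnap_daily VMdata/vm-100-restore

# The clone appears as a block device that can be mounted or attached to a VM
ls /dev/zvol/VMdata/vm-100-restore

# When done, destroy the clone; the snapshot itself is untouched
zfs destroy VMdata/vm-100-restore
```

Note that a clone depends on its origin snapshot; if a fully independent copy is needed instead, `zfs send | zfs receive` avoids that dependency.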
I installed Beta 2 on one of my servers; it has dual E5-2620 v3 CPUs and 128GiB of RAM. I have 8x 4TB rust disks in a pool, and two 2TB SSDs in a pool. I get identical speeds and performance to what I had with ZoL on Debian Buster. My previous experience with TrueNAS Beta 1 was not so hot, I...
Haven't been here for a while.
Have a strange question, maybe?
I have a Proxmox 5.4 server setup on ZFS.
The setup is as follows:
The system is on 2 SSDs in a mirrored pool,
and around 16 HDDs of various sizes in mirrored pools grouped by disk size. For example, I have a pool with 2T disks. A...
Hi, I have several drives connected to an AsRock X470D4U motherboard (3 PCIe slots) using an HP H220 HBA (MPT2BIOS-7.39.02.00, IT mode) and a SuperMicro BPN-SAS-216A backplane. The drives are configured into a ZFS array in Ubuntu 19.10, and `lsblk` and `zpool status` both show all the drives as...
After spending a lot of time reading valuable sources such as STH, I spent the last 3 months slowly incubating my new file server. I am unfortunately quite disappointed by the performance of the resulting system.
I apologize for the long post, but I will try to give all relevant info that...
I'm trying to identify a good method of conducting a two-way sync between a ZFS (OmniOS) array and a remote Windows-based hardware array. Both have NFS and Samba shares. The arrays are around 20TB in size with 16TB utilized. Please don't tell me to switch the hardware Windows machine...
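One option that leaves the Windows box as-is: mount its SMB share on the OmniOS side and run a bidirectional sync tool such as unison over the two local paths. A sketch with placeholder host, share, and path names:

```shell
# Mount the Windows SMB share locally (illumos smbfs client)
mkdir -p /mnt/winshare
mount -F smbfs //user@winhost/share /mnt/winshare

# Two-way sync; -batch skips interactive prompts, -times preserves mtimes
unison /tank/data /mnt/winshare -batch -times
```

At 20TB the initial scan will take a long time; unison keeps per-replica state, so subsequent runs only walk metadata.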
Hello and Happy New Year to Everyone,
I'm going to get my hands on Supermicro hardware to build a virtualization server, and I have a question regarding using it properly for ZFS.
So first the hardware:
Supermicro | Products | Motherboards | Xeon® Boards | X9DRH-7TF
Is it possible to see the real space usage of a file in ZFS when, e.g., lz4 compression is enabled?
When I do an `ls -l` of a folder, it looks like it shows the uncompressed size (or compression is completely ineffective for those files).
Has anyone already dealt with that kind of stuff...
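`ls -l` reports the file's logical (uncompressed) length; the allocated size, which does reflect compression, shows up via `du` and the dataset properties. For example (file and dataset names are placeholders):

```shell
# Logical size -- unaffected by compression
ls -l bigfile

# Actual allocated size on disk
du -h bigfile

# Ratio and totals for the whole dataset
zfs get compressratio,used,logicalused tank/data
```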
Is there a way to suppress this?
I mean, as far as I can see, these (almost full rights for owner@, read_attr_set for group@ and everyone@) will not really interfere with anything, but with a bit of OCD they look horrible in my neatly designed group-based ACLs. Copying data onto the folders via...
I posted this on Dell's community as well but no responses as yet.
So I think I have a serious problem that I need any help I can get. I don't know if the H310, the SAS cables, the enclosure, or the drives are bad. I will explain.
I purchased a re-purposed R820 with an H310 and 10 1TB...
I just upgraded my server rack with a few more drives. As 12TB per disk is a lot of space, I wanted to check them thoroughly for errors before actually using them. On Windows I know of several tools for that, but I have no experience with Solaris/OmniOS.
What do you usually do for...
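On OmniOS, one common approach is a SMART long self-test plus a destructive full-surface write/read pass while the disks are still empty. A sketch (smartmontools is available from the package repos; the device names are examples):

```shell
# Long SMART self-test; check the results later with -a
smartctl -t long /dev/rdsk/c1t0d0
smartctl -a /dev/rdsk/c1t0d0

# Destructive full write pass, then read everything back (wipes the disk!)
dd if=/dev/zero of=/dev/rdsk/c1t0d0s0 bs=1M
dd if=/dev/rdsk/c1t0d0s0 of=/dev/null bs=1M
```

The `format` utility's analyze submenu offers a similar surface scan if you prefer a built-in tool.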
Solution in post two. TL;DR: it was the compression (which is, of course, almost perfect on a file filled with zeros).
I'm trying to find a way to create duplicate zvols (e.g., from a gold VM) without using cloning, which would cause a dependency issue (I'd rather be able to delete the parent...
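Assuming the goal is fully independent copies, `zfs send | zfs receive` duplicates a zvol without the parent dependency a clone would carry. A sketch with placeholder names:

```shell
# Snapshot the gold image and copy it into a new, independent zvol
zfs snapshot tank/gold-vm@dup
zfs send tank/gold-vm@dup | zfs receive tank/vm-copy1

# Clean up: the copy keeps working after both snapshots are gone
zfs destroy tank/gold-vm@dup
zfs destroy tank/vm-copy1@dup
```

Unlike a clone, the received zvol shares no on-disk dependency with the source, so the gold VM (or its snapshots) can be deleted freely.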
So I have taken the plunge and am building a Linux lab server around an E2100 Xeon. I have six 8TB SATA drives that I plan to put into a ZFS RAIDZ2 pool. I am considering adding some L2ARC and a ZIL/SLOG device to increase IOPS, and I have a 240GB M.2 SSD (Corsair MP510) for that. To complete the...
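If the single M.2 SSD is split between both roles, the devices are attached with `zpool add`. A sketch assuming the pool is named `tank` and the SSD has already been partitioned:

```shell
# Small partition as SLOG -- only accelerates synchronous writes
zpool add tank log /dev/nvme0n1p1

# The rest as L2ARC read cache -- note its headers consume ARC RAM
zpool add tank cache /dev/nvme0n1p2
```

Worth checking first whether the workload actually issues sync writes (e.g. NFS, databases); if it doesn't, a SLOG changes nothing, and a consumer SSD without power-loss protection is a questionable SLOG in any case.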
Cockpit ZFS Manager
An interactive ZFS on Linux admin package for Cockpit.
Cockpit ZFS Manager is currently pre-release software. Use at your own risk!
Samba: 4+ (Optional)
0.3.3.404 Now Available (09-08-2020)...
On my IBM x3650 M3 server (model 7945K3G) I added two Kingston A1000 M.2 SSDs with 960 GB of capacity (the exact model is SA1000M8/960G A1000), mounted in two M.2 PCIe adapters (https://www.amazon.it/gp/product/B07CBJ6RH7/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1).
I'm using Proxmox...
Retooling my backup solution for more space and failover potential.
I have two 1U servers left over from an older project, both identical, with 4 3.5" bays. I want a somewhat large network backup target, mainly for accessible archived storage for potential later use. I have a few...