NUC 13, 2 NVMe, 1 SSD. Proxmox ZFS RAID 1 on NVMe or ?


KC2020

New Member
Dec 29, 2023
4
0
1
I'm setting up a NUC 13 to learn Proxmox in my home lab. I've read conflicting opinions on the advantages and disadvantages of ZFS on consumer-grade NVMe drives, with speed loss and heavy wear being the most commonly cited issues.

So I joined this forum to get some opinions about how I should proceed.

My NUC has a 1 TB Seagate PCIe 4 NVMe and a 1 TB Transcend NVMe that is PCIe 3 SATA, plus a 2 TB SSD, 64 GB of RAM, and an i7-1360 with 12 cores and 16 threads.

I'd like to install a couple of different Linux distros, a W11 VM and pfSense to learn my way around Proxmox.

So should I use ZFS RAID 1 across the NVMes, RAID 0 on just one of them, or skip ZFS altogether ?

Any and all information is appreciated !
 

NerdAshes

Active Member
Jan 6, 2024
101
49
28
Eastside of Westside Washington
With your mix of drives, I would skip ZFS. Not that I think it'll kill your drives (for at least 5 years, if they're brand new and have a decent TBW rating). I just don't think it's going to add enough performance value. You currently have interesting targets for the host OS, VMs, LXCs, backups, etc. That should be fun to mess around with and provide more educational value without RAID eating up two of your drives and performing worse (in CPU and RAM terms).

If it's not an educational toy, but instead it's going to be hosting services you'll actually rely on - I'd have (slightly) different opinions.
 

KC2020

New Member
Dec 29, 2023
4
0
1
Thanks, that's what I was thinking.

This is definitely just to learn my way around Proxmox and the NUC is what I have available. I'll put together something with the appropriate components to learn the ZFS advantages when I better understand what I'm doing with Proxmox.
 

KC2020

New Member
Dec 29, 2023
4
0
1
You might have even more fun installing Debian, partitioning up your drives and then slapping Proxmox on top that way. They have a walkthrough in their docs (which are great).
OK. I'll take a look.

The advantage of that approach is ?

Thanks !
 

gea

Well-Known Member
Dec 31, 2010
3,163
1,195
113
DE
So should I use ZFS RAID 1 for the NVMEs or RAID 0 on just one of them or skip ZFS altogether ?
ZFS has many advantages over older filesystems, but they come with a price tag:

"Copy on Write" protects against a corrupt filesystem when the system crashes during a write operation. It also gives you snapshot versions without delay or initial space consumption.

CoW costs a little performance due to higher fragmentation. On larger files like VM images, the minimal I/O unit for changes is a ZFS datablock of recordsize. You should not use a large recordsize like 1M; use smaller values like 16K-64K for VM storage.
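As a concrete sketch of the recordsize advice above (the pool and dataset names are assumptions; note that zvol-backed VM disks use `volblocksize` instead, which must be set at creation time):

```shell
# Set a smaller block size on a (hypothetical) dataset holding
# file-based VM images -- applies to newly written blocks only
zfs set recordsize=64K tank/vmstore

# Verify the property took effect
zfs get recordsize tank/vmstore
```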

Checksums guarantee validity of data,
but increase I/O on reads and writes.

ZFS has sophisticated RAM-based read/write caches to increase performance;
these need some extra RAM (give ZFS on Linux at least 8-12 GB of RAM, on Solaris 4-8 GB).

Enabling sync to protect all writes in the RAM-based write cache
costs write performance.
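Sync behavior is a per-dataset property; a minimal sketch (the dataset name is an assumption):

```shell
# Force synchronous semantics for every write on this (hypothetical) dataset;
# safest for VM data, but slower without a fast dedicated log device
zfs set sync=always tank/vmstore

# Revert to the default (honor each application's own sync requests)
zfs set sync=standard tank/vmstore
```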

In the end I would not miss these advantages and would pay the price for them.

I would use the Transcend for OS and other data, the Seagate NVMe for VMs and performance-sensitive data (without RAID), and the SSD for daily or hourly backup syncs via ZFS replication.
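The replication idea above can be sketched with snapshot-based send/receive between two local pools (pool and snapshot names are assumptions; tools like `pve-zsync` or `sanoid`/`syncoid` automate this):

```shell
# Take a snapshot of the (hypothetical) VM dataset
zfs snapshot tank/vmstore@hourly-1

# Full send to a backup pool on the SSD
zfs send tank/vmstore@hourly-1 | zfs receive backup/vmstore

# Later, send only the changes since the previous snapshot
zfs snapshot tank/vmstore@hourly-2
zfs send -i @hourly-1 tank/vmstore@hourly-2 | zfs receive backup/vmstore
```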
 

KC2020

New Member
Dec 29, 2023
4
0
1
ZFS has sophisticated RAM-based read/write caches to increase performance;
these need some extra RAM (give ZFS on Linux at least 8-12 GB of RAM, on Solaris 4-8 GB).
OK. I understand how to allocate memory to the VMs. Is this what you mean ? Or are you saying leave 8-12 GB of RAM available, not allocated to a VM ?

In the end I would not miss these advantages and would pay the price for them.

I would use the Transcend for OS and other data, the Seagate NVMe for VMs and performance-sensitive data (without RAID), and the SSD for daily or hourly backup syncs via ZFS replication.
So format each drive as ZFS RAID 0 and then follow the layout you describe ?

Thanks !
 

gea

Well-Known Member
Dec 31, 2010
3,163
1,195
113
DE
ZFS uses most available memory for read/write caching by default.
You must limit ZFS RAM usage if you want guaranteed RAM for VMs.
Google "limit zfs ram usage in proxmox"
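As a hedged sketch of what that search turns up: on Proxmox (ZFS on Linux) the ARC cap is a module parameter. The 8 GiB figure here is an example, not a recommendation:

```shell
# Cap the ZFS ARC at 8 GiB (value in bytes: 8 * 1024^3)
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf

# Apply to the running system without a reboot; for the modprobe setting
# to persist on a ZFS-root Proxmox install, also run `update-initramfs -u`
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```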

No, no RAID-0 (RAID-0 is for a pool that stripes the added capacity of 2+ disks).
Just create a pool with one disk as a basic vdev.
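A minimal sketch of such a single-disk pool (pool name and device path are assumptions; the Proxmox GUI under Disks > ZFS does the same thing):

```shell
# Create a pool from a single disk -- one basic vdev, no redundancy
# (ashift=12 assumes 4K physical sectors, typical for NVMe)
zpool create -o ashift=12 tank /dev/disk/by-id/nvme-example-disk

# Confirm the layout: one top-level disk vdev
zpool status tank
```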