FreeNAS performance to Proxmox ZFS


Active Member
Apr 18, 2015
I'm on 11.2-stable. I love the new UI, but the platform itself is feeling really tired.

I'm considering installing Proxmox and using it as a ZFS file server. I've got a Xeon D-1541, and this is a server with 8x 3.5" 8TB drives. CPU and memory I've got lots of. My other nodes are Proxmox, so I'm looking at FreeNAS and thinking: why not just make it a bigger Proxmox cluster with a ZFS node?

I keep some files in private network shares only accessible via a VLAN to Windows. I'm stuck with 1G between the NAS and Windows.

I've heard horror stories about ZFS performance. Can ZFS on Proxmox fill a 1G network with sequential writes without a ZIL? FreeNAS can.


Staff member
Dec 21, 2010
1GbE is no issue. I have an HPE ML110 with 4x 10TB drives and no SSD cache right now, and it can handle 1GbE out of the box without tuning.
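For reference, a quick local sanity check of sequential write throughput looks something like this (the mount path is a placeholder; also note that on a pool with compression enabled, /dev/zero compresses away to nothing and inflates the result, so use fio or a file of random data for a stricter test):

```shell
# Write 4 GiB to the pool mount (hypothetical path /tank), forcing a
# flush at the end so the number isn't just ARC/page-cache speed.
dd if=/dev/zero of=/tank/ddtest bs=1M count=4096 conv=fdatasync
rm /tank/ddtest
```

1GbE tops out around 112-117 MB/s of payload, so any modern multi-disk pool should clear that locally; the interesting question is whether Samba/NFS overhead eats the margin.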
  • Like
Reactions: T_Minus


Active Member
Jan 30, 2015
IIRC the only fly in the Proxmox ointment is that it expects all the nodes in a cluster to be identically configured, so you may not want to add your file server to your existing cluster.

Adding a ZFS pool itself is no issue though. I've run Proxmox as my file server in addition to virtualization duties. Just configure the storage pool and samba server manually and GTG...
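For anyone following along, the manual route is only a handful of commands on Proxmox's Debian base. The pool name, disk IDs, and share details below are made-up placeholders, not a prescription:

```shell
# Create a raidz2 pool from 8 disks (disk IDs here are placeholders --
# use /dev/disk/by-id paths so the pool survives device renumbering).
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DISK0 /dev/disk/by-id/ata-DISK1 \
  /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
  /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 \
  /dev/disk/by-id/ata-DISK6 /dev/disk/by-id/ata-DISK7
zfs create -o compression=lz4 tank/share

# Install Samba and point a share at the dataset.
apt install samba
cat >> /etc/samba/smb.conf <<'EOF'
[share]
   path = /tank/share
   valid users = youruser
   read only = no
EOF
systemctl restart smbd
```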


Oct 26, 2018
The latest update to Proxmox also adds GUI support for creating ZFS and other disk volumes (Ceph, etc.). Not that it is particularly hard to do via the CLI, but it's nicer from an overall management standpoint.

I've found performance to be in line with what others are reporting; I have a D-1541 board as well. I'm adding a SLOG (the ASRock board I have has dual M.2 slots that are basically made for the 32GB Optane M.2 drives). Don't overlook this: your network might be 1G now, but if you decide at some point to run VMs on this server that attach to the ZFS pool via NFS or iSCSI, you'll probably want at least a SLOG or you'll see a performance impact there. For file sharing over the 1G it won't make a difference.
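Attaching the Optane as a SLOG later is a one-liner; the pool name and device path below are placeholders:

```shell
# Add a log vdev to an existing pool (placeholder device ID).
zpool add tank log /dev/disk/by-id/nvme-OPTANE-M10-32GB
# A single SLOG is fine for home use; mirror it if losing a few
# seconds of in-flight sync writes on device failure matters:
#   zpool add tank log mirror /dev/disk/by-id/nvme-A /dev/disk/by-id/nvme-B
zpool status tank
```

Worth remembering that a SLOG only accelerates synchronous writes (NFS, iSCSI, databases); ordinary async SMB traffic never touches it, which is why it makes no difference for plain file sharing.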


cat lover server enthusiast
Jul 7, 2016
I don't know about Proxmox+ZFS, but on a test machine I have running CentOS 7 + ZFS (I think 0.7.11?), 24x 3.5" 4TB SATA HDDs in a single raidz2 pool, booted with 32GB RAM, 1M record size, and running iozone like this:

iozone -s 8g -r 1m -i 0 -i 1 -i 2 -t 16 -j 64
These are the results I got (no tuning so far, other than recordsize=1M):

"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 1024 kBytes"
"Output is in kBytes/sec"

"  Initial write "  1633118.23
"        Rewrite "  1784556.62
"           Read "  1185868.28
"        Re-read "  1184013.30
"    Random read "   227690.38
"   Random write "  1762040.65
So, that's about 1.6GB/s sequential write and 1.18GB/s sequential read. Random read is blah... Anyway, for sequential I/O it looks adequate for a 10GbE network (≈1.2GB/s), and definitely not a problem for 1Gbps.
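The conversion from iozone's kBytes/sec to those GB/s figures is straightforward (here using decimal units, so the exact value shifts a couple of percent if you treat a kByte as 1024 bytes instead):

```shell
# Convert the parent-throughput figures from the run above to GB/s.
awk 'BEGIN {
  printf "seq write: %.1f GB/s\n", 1633118.23 / 1e6
  printf "seq read:  %.1f GB/s\n", 1185868.28 / 1e6
  # 10GbE line rate is 10^10 bits/s = 1.25 GB/s before protocol overhead,
  # so the sequential read number sits right at what the wire can carry.
}'
```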