ZFS write perf - not equal to iostat


Grohnheit

New Member
Feb 20, 2016
Hi.

This is my first proper forum post. I just want to thank you all; I have been reading a lot on these forums and have gotten a lot of help. I hope you can help me with this issue, so I can get a little more speed out of my new server :)

I have just set up a new media server / lab server for my home.
This is my first Proxmox / ZFS build and I am very green, so please forgive me if I am asking a rookie question.
I did spend a lot of time reading online before I went with this setup, and it has paid off so far :)

Quick specs of new server.
Supermicro H11SSL-NC
EPYC 7351p
64GB ECC RAM
LSI9201 HBA for HDDs
2 x Samsung 850 Evo 500GB SSDs for boot and VM OS.
1 x Intel Optane 900p 280GB for SLOG and fun (or l2arc if it makes sense at some point)
8 x Seagate Archive 8TB super slow SMR drives. (When I am done migrating data, I will add 8 more, which will double the speed, I hope)

The HDDs are running in a raidz2. I made a 30GB partition on the Optane and am using it as the SLOG.
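
In case it matters, this is roughly how I check that the Optane partition really got attached as a separate log device and not as a normal data vdev (assuming the pool is called data01, same as the dataset below; commands only):

root@host01:~# zpool status data01   # the Optane partition should show up under a separate "logs" section
root@host01:~# zpool list -v data01  # per-vdev view, to confirm the ~30GB log device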

I am running a single Windows Server VM on top of all this, and this will function as my DC and file server.

I have started migrating 25TB of data, and the performance is around 25-30 MB/s according to Windows and Proxmox (network as well as disk I/O).

But when I look at iostat on the Proxmox / ZFS host, I see write bandwidth around 80 MB/s with sync=standard, and around 60 MB/s with sync=disabled.
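
One back-of-the-envelope guess (assuming zpool iostat also counts parity writes and the writes that go to the log device, which I think it does):

  8-wide raidz2: raw writes ≈ logical writes x 8/6
  ~30 MB/s x 8/6          ≈ ~40 MB/s to the raidz2 disks
  + ~30 MB/s to the SLOG  (with sync=standard the sync writes hit the log first)
  ≈ ~70 MB/s total, which is in the same ballpark as the ~80 MB/s I see

But I am only guessing at how the counting works, so please correct me.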

I am using the virtio drivers for both SCSI and the NICs.
In Proxmox the disk is using the default cache setting.
I tried copying some data to the C drive (the SSDs), and performance here is much better of course, but I see the same thing: the Windows copy shows 70 MB/s, and iostat reports around 220 MB/s.

So did I screw up somewhere, or is iostat just reporting false numbers? (I use zpool iostat 60.)
The files I am copying are big media files.
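
If it helps, I can also run the per-device variant, so the log device and the individual disks are broken out separately (same 60-second interval):

root@host01:~# zpool iostat -v data01 60

That should show how much of the write bandwidth goes to the Optane log partition versus the raidz2 members.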

I used ashift=12 when I created the pool.
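
To double-check the ashift that is actually in use, I look it up with zdb (I believe this only works when the pool is in the default zpool.cache file):

root@host01:~# zdb -C data01 | grep ashift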

my config:
root@host01:~# zfs get all
NAME PROPERTY VALUE SOURCE
data01 type filesystem -
data01 creation Sat Apr 28 9:56 2018 -
data01 used 30.4T -
data01 available 9.56T -
data01 referenced 205K -
data01 compressratio 1.00x -
data01 mounted yes -
data01 quota none default
data01 reservation none default
data01 recordsize 1M local
data01 mountpoint /data01 default
data01 sharenfs off default
data01 checksum on default
data01 compression on local
data01 atime off local
data01 devices on default
data01 exec on default
data01 setuid on default
data01 readonly off default
data01 zoned off default
data01 snapdir hidden default
data01 aclinherit restricted default
data01 createtxg 1 -
data01 canmount on default
data01 xattr sa local
data01 copies 1 default
data01 version 5 -
data01 utf8only off -
data01 normalization none -
data01 casesensitivity sensitive -
data01 vscan off default
data01 nbmand off default
data01 sharesmb off default
data01 refquota none default
data01 refreservation none default
data01 guid 3930392171083245452 -
data01 primarycache all default
data01 secondarycache all default
data01 usedbysnapshots 0B -
data01 usedbydataset 205K -
data01 usedbychildren 30.4T -
data01 usedbyrefreservation 0B -
data01 logbias latency local
data01 dedup off default
data01 mlslabel none default
data01 sync standard local
data01 dnodesize legacy default
data01 refcompressratio 1.00x -
data01 written 205K -
data01 logicalused 1.46T -
data01 logicalreferenced 40K -
data01 volmode default default
data01 filesystem_limit none default
data01 snapshot_limit none default
data01 filesystem_count none default
data01 snapshot_count none default
data01 snapdev hidden default
data01 acltype off default
data01 context none default
data01 fscontext none default
data01 defcontext none default
data01 rootcontext none default
data01 relatime off default
data01 redundant_metadata all default
data01 overlay off default
 

K D

Well-Known Member
Dec 24, 2016
One obvious thing I can see is that you have SMR drives. That's the one hole I see in an otherwise awesome system. I have had problems with SMR drives in ZFS as well as H/W RAID. I doubt that you will be able to get consistent performance with them in ZFS.
 

Grohnheit

New Member
Feb 20, 2016
Hi K D
Yeah, I know. I have been using these disks in H/W RAID in my old server. (And that is the only reason I bought 8 more.)
So I know they will be slow and inconsistent.

It's more the big difference between real performance and iostat that I am worried about, and that happens on both my SSDs and HDDs.
If we look at the SSDs, since they are more consistent beyond the first gig: iostat reports 220 MB/s, which seems very realistic for these disks. The only problem is that while iostat reports 220 MB/s, I am only seeing 70 MB/s in my guest and in Proxmox.
That makes me believe that I screwed up somewhere, unless iostat is reporting false numbers.
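
Same kind of back-of-the-envelope as above for the SSDs (assuming the two 850 Evos are a ZFS mirror, that zpool iostat counts the writes to each side separately, and that this pool has no separate log device, so the ZIL lives in-pool):

  ~70 MB/s x 2 (both sides of the mirror)        ≈ 140 MB/s
  + in-pool ZIL writes for whatever I/O is sync  → somewhere above that

So maybe the 220 MB/s is just the same accounting effect, but again, I am only guessing.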