How many here run ZFS on Linux and get good fsync performance?

Discussion in 'Linux Admins, Storage and Virtualization' started by BackupProphet, Aug 18, 2018.

  1. BackupProphet

    BackupProphet Well-Known Member

    Joined:
    Jul 2, 2014
    Messages:
    720
    Likes Received:
    253
    I just installed an Optane 900p as SLOG on my SSD pool. It made zero difference: I still only get 1/4 of the IOPS I get from ZFS on FreeBSD with the 32GB Optane. Before I reinstall and test again, I would like to know whether everyone else also struggles with slow fsync performance on their ZoL installation. I am on Ubuntu 18.04 with ZoL 0.7.9.
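    For anyone wanting to reproduce this, pg_test_fsync is run roughly like so (the path is an example; point it at a file on the pool under test):

```shell
# pg_test_fsync ships with PostgreSQL (package postgresql-contrib on
# Debian/Ubuntu). It reports fsync'd write ops/sec for each sync method.
# -f sets the test file, -s the seconds per test.
pg_test_fsync -f /tank/pgtest/testfile -s 5
```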
     
    #1
    Monoman likes this.
  2. Monoman

    Monoman Active Member

    Joined:
    Oct 16, 2013
    Messages:
    261
    Likes Received:
    64
    Not to be difficult, but if you're testing solutions like this, I'd try Proxmox and then also CentOS for ZoL performance. I've read many times about poor Ubuntu ZFS performance. Could be strange variance.
     
    #2
    arglebargle likes this.
  3. BackupProphet

    BackupProphet Well-Known Member

    Joined:
    Jul 2, 2014
    Messages:
    720
    Likes Received:
    253
    I took your suggestion and did some research this weekend. Conclusion: ZoL has much better performance on CentOS, but it is still slow.

    For comparison: same hardware, default settings, ZoL 0.7.9, benchmarked with pg_test_fsync:

    Ubuntu 2200 iops
    Debian 2000 iops
    CentOS 8000 iops

    FreeBSD 16000 iops

    Ubuntu + XFS 34000 iops
    Ubuntu + EXT4 32000 iops
    Ubuntu + BcacheFS 14000 iops
     
    #3
    MikeWebb, T_Minus and Monoman like this.
  4. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,429
    Likes Received:
    327
    That's a huge difference :eek:
    Does it have something to do with Debian & CentOS using "older" kernel versions?
     
    #4
  5. BackupProphet

    BackupProphet Well-Known Member

    Joined:
    Jul 2, 2014
    Messages:
    720
    Likes Received:
    253
    Debian was 4.5-ish, CentOS was 3.10-ish, and Ubuntu is 4.16.
     
    #5
  6. Monoman

    Monoman Active Member

    Joined:
    Oct 16, 2013
    Messages:
    261
    Likes Received:
    64
    Would it be possible to redo your test with a Solaris install? Be it Oracle or one of the free versions.
     
    #6
  7. gigatexal

    gigatexal I'm here to learn

    Joined:
    Nov 25, 2012
    Messages:
    2,484
    Likes Received:
    440
    wtf - why is there so much performance difference ... this does not bode well. I mean CoW vs. XFS is apples and oranges but still.
     
    #7
  8. amalurk

    amalurk Member

    Joined:
    Dec 16, 2016
    Messages:
    95
    Likes Received:
    11
    Wow. You should post this to appropriate Debian and Ubuntu lists, maybe someone on the inside will take notice and investigate. Sure seems like there must be some optimizations that could be easily done.
     
    #8
  9. MikeWebb

    MikeWebb Member

    Joined:
    Jan 28, 2018
    Messages:
    69
    Likes Received:
    16
    Gosh. Good find. PVE must limp on ZFS by comparison. I can see why RHEL uses XFS. I’ve only just started looking at ZFS tuning for Optane and 40G RDMA. Looks like there are lots of gains to be made. I got as far as dataset recordsize, zvol volblocksize, and alignment shift, then the red wine took over.
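    For reference, those three knobs are set like this (the property names are real ZFS properties; the pool/dataset names and values are only examples, not tuning advice):

```shell
# recordsize is a per-dataset property, changeable at any time
# (it only affects newly written blocks):
zfs set recordsize=8K tank/pgdata

# volblocksize is fixed at zvol creation time:
zfs create -V 10G -o volblocksize=8K tank/vm-disk

# ashift (alignment shift) is fixed at vdev creation time;
# ashift=12 corresponds to 4K physical sectors:
zpool create -o ashift=12 tank mirror sda sdb
```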
     
    #9
    Evan and gigatexal like this.
  10. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    851
    Likes Received:
    89
    Is the difference also present when you compare fio runs?
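    A fio job that approximates the sync-write pattern pg_test_fsync exercises might look like this (a sketch; the block size, file size, and path are assumptions, not the thread's exact parameters):

```shell
# Single-threaded 8k writes with an fsync after every write,
# similar in spirit to pg_test_fsync. Run against a file on the pool.
fio --name=syncwrite --filename=/tank/fio.test \
    --rw=write --bs=8k --size=1G \
    --ioengine=psync --fsync=1 \
    --runtime=60 --time_based
```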
     
    #10
  11. BackupProphet

    BackupProphet Well-Known Member

    Joined:
    Jul 2, 2014
    Messages:
    720
    Likes Received:
    253
  12. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    851
    Likes Received:
    89
    I have a setup with current Proxmox and a 900p AIC I could benchmark tomorrow - what SSDs do you have in the pool, and how is the layout?
     
    #12
    gigatexal likes this.
  13. BackupProphet

    BackupProphet Well-Known Member

    Joined:
    Jul 2, 2014
    Messages:
    720
    Likes Received:
    253
    Intel S3700 and Micron 500DC (with latest firmware), but only the SLOG gets benchmarked when doing sync writes.
     
    #13
  14. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    851
    Likes Received:
    89
    Ok, will also use some 500DCs or even a pair of HUSMM1680s, just to put something reasonably fast behind the SLOG... Not sure it's only the SLOG that affects this, as it's CoW...
     
    #14
  15. BackupProphet

    BackupProphet Well-Known Member

    Joined:
    Jul 2, 2014
    Messages:
    720
    Likes Received:
    253
    With 2000 IOPS at 8kB sync writes, neither the S3700 nor the Micron 500DC is reaching its full potential. I get 7000-8000 IOPS with the S3700 and 9000-10000 IOPS with the Micron 500DC on ext4.
     
    #15
  16. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    851
    Likes Received:
    89
    Yes, I know the Microns can go surprisingly close to 40k sustained 4k IOPS with fio.

    No idea about the pg benchmark numbers and how they relate - besides that it's 8k blocks.
     
    #16
  17. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,404
    Likes Received:
    1,313
    Have any of ya tested Optane in Proxmox as storage, not SLOG? Was that crippled too?
     
    #17
  18. gigatexal

    gigatexal I'm here to learn

    Joined:
    Nov 25, 2012
    Messages:
    2,484
    Likes Received:
    440
    Is there a ZFS on Linux mailing list or something? They should know about this. It shouldn’t be this hard to get super high-end drives to perform well.

    I did have an idea. To rule out ZFS, I think there’s a way to limit, if not turn off, the CoW nature of it, right? I can’t tell where I might have seen that flag. I guess testing the drive as a raw block device, then in say XFS, and then in stock or tweaked ZFS is the same idea. I just hate seeing ZFS so slow.
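    There is no switch to turn off CoW in ZFS, but the sync path can be taken out of the equation to see how much of the gap it accounts for (unsafe for real data - test pools only; the dataset name is an example):

```shell
# With sync=disabled, ZFS acknowledges fsync() immediately and skips the
# ZIL/SLOG entirely. If IOPS jump, the bottleneck is the sync/ZIL path.
zfs set sync=disabled tank/test

# Restore the default behavior when done:
zfs set sync=standard tank/test
```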
     
    #18
  19. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    851
    Likes Received:
    89
    Haven't managed to do anything today, too much day-to-day business :(

    I have used the 900p as primary storage with PVE, and also via NVMe-oF directly into a VM (SR-IOV into the VM, initiator on the VF). Performance was always 'on point'.

    Edit: to get the most out of this, a recent kernel with blk-mq enabled plus the right I/O schedulers is absolutely a 'must'.
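    For the scheduler part, the usual check and change look like this (the device name is an example):

```shell
# Show the available schedulers for an NVMe device; the active one is
# shown in brackets:
cat /sys/block/nvme0n1/queue/scheduler

# 'none' is generally the right choice for fast NVMe under blk-mq:
echo none | sudo tee /sys/block/nvme0n1/queue/scheduler
```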
     
    #19
    Last edited: Sep 17, 2018
    T_Minus likes this.
  20. gigatexal

    gigatexal I'm here to learn

    Joined:
    Nov 25, 2012
    Messages:
    2,484
    Likes Received:
    440
    Wait you can carve up a 900p with sriov like you might an Ethernet interface?
     
    #20