
ZFS performance vs RAM, AiO vs barebone, HD vs SSD/NVMe, ZeusRAM Slog vs NVMe/Optane

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by gea, Dec 6, 2017.

  1. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,580
    Likes Received:
    502
I have extended my benchmarks to answer some basic questions:

    - How good is an AiO system compared to a barebone storage server
    - Effect of RAM on ZFS performance (random/sequential, read/write)
    (2/4/8/16/24G RAM)
    - Scaling of ZFS over vdevs
    - Difference between HD vs SSD vs NVMe vs Optane
    - Slog: SSD vs ZeusRAM vs NVMe vs Optane

    current state:
    http://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf
     
    #1
    azev, J-san, poto and 2 others like this.
  2. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    689
    Likes Received:
    63
What exactly does the benchmark you call from the GUI do?
    Any chance this can be reproduced on ZoL to compare?
     
    #2
  3. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,580
    Likes Received:
    502
My menu Pools > Benchmarks (in 17.07dev) is a simple Perl script. The current benchmark set uses some Filebench workloads for random, sequential and mixed r/w loads; the other options are dd and a simple write loop of 8k (or larger) writes via echo. The script executes the benchmarks one by one, switching sync on writes automatically, and allows modification of some settings directly or via shellscript (for ZFS tuning). This avoids the tedious manual switching of settings between large benchmark series, as each run consists of 7 benchmarks (write random, write sequential, both sync and async, read random, r/w and read sequential). Since many benchmarks have to be run, I selected ones that give reasonably reliable results with a short runtime. Therefore the run-to-run differences are around 10%, but this should not affect the general result.

So it should work on ZoL, and I would expect similar results there; maybe you need some extra RAM for the same values.
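The automatic sync switching gea describes could be sketched roughly like this (a hypothetical shell outline, not napp-it's actual Perl script; the pool/filesystem name and dd sizes are made up). The `run` wrapper only prints the commands, so the sketch is a dry run:

```shell
#!/bin/sh
# Dry-run sketch of a benchmark loop that toggles sync automatically
# between runs. tank/bench and the dd sizes are assumptions.
FS=tank/bench
run() { echo "+ $*"; }   # print instead of execute

for SYNC in disabled always; do
    run zfs set sync=$SYNC $FS
    # sequential write test: 1G in 1M blocks
    run dd if=/dev/zero of=/$FS/ddtest bs=1M count=1024
    run rm /$FS/ddtest
done
run zfs set sync=standard $FS   # restore the default
```

In a real script the `run` wrapper would execute the commands and capture the dd throughput for each sync setting.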
     
    #3
    Last edited: Dec 6, 2017
  4. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    689
    Likes Received:
    63
OK, I will set up an AiO on Proxmox and have a look into it.
    I would just like to have a point of comparison for ZoL / some idea what numbers to expect with the Optane, but I usually use fio, which doesn't compare well with your numbers.
    I want to try what happens if I export a partition of the Optane via NVMe-oF and use it as slog on the initiator for some local HDDs.

    It makes sense to turn sync on/off by script in a benchmark series :)
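The initiator side of that NVMe-oF slog experiment might look roughly like this (a sketch under assumptions: nvme-cli with an RDMA transport, made-up address/NQN/device/pool names; the target-side nvmet export is omitted). Again, `run` only prints the commands:

```shell
#!/bin/sh
# Dry-run sketch: attach a remote Optane partition as slog for a local pool.
# Address, NQN, device and pool names are hypothetical.
run() { echo "+ $*"; }   # print instead of execute

# 1. connect to the exported Optane partition (NVMe over RDMA)
run nvme connect -t rdma -a 192.168.1.10 -s 4420 -n nqn.2017-12.test:optane

# 2. add the remote namespace as slog to the local HDD pool
run zpool add tank log /dev/nvme1n1

# 3. force sync writes so the slog is actually exercised
run zfs set sync=always tank
```

Network latency of the RDMA hop would add directly to every sync write, so this mainly tests whether a remote Optane slog still beats a local SSD slog.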
     
    #4
  5. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    1,662
    Likes Received:
    204
So when will automatic pool creation/destruction/composition based on a config file be added?
    Looking forward to running this on my SSD or potential NVMe pool ;)

    edit:
    typo:
    4.5 A SSD based pool via LSI pass-through (4 x Intel DV 3510 vdev)

    and the same error in other places
     
    #5
  6. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,580
    Likes Received:
    502
My mind is faster than my fingers...
    I will correct them.

    At the moment the whole benchmark series is a voluntary extra task.
    Now everyone wants to classify results.

    In German we say: "Wer misst, misst Mist" (roughly: he who measures, measures rubbish).
     
    #6
    _alex likes this.
  7. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    1,662
    Likes Received:
    204
Yeah - I thought you probably have most of the stuff scripted anyway - and it might make your next run simpler too ;)
     
    #7
  8. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    689
    Likes Received:
    63
I like benchmarks for spotting other bottlenecks/misconfigurations.
    So if there is a clear range of what should be reached, there must be something wrong if your own results are orders of magnitude below it ;)
     
    #8
  9. azev

    azev Active Member

    Joined:
    Jan 18, 2013
    Messages:
    406
    Likes Received:
    100
@gea which NVMe driver did you use in your test? The native one from the ESXi installation, or did you install the Intel NVMe driver from the VMware website?
     
    #9
  10. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,580
    Likes Received:
    502
    ESXi native
     
    #10