A glimpse into the future of slogs ...

Discussion in 'FreeBSD and FreeNAS' started by Rand__, Jul 14, 2019.

  1. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,463
    Likes Received:
    500
    OK, not true, it's actually the present... thought I should share ;)


    diskinfo -citvwS /dev/nvd0
    512 # sectorsize
    375083606016 # mediasize in bytes (349G)
    732585168 # mediasize in sectors
    0 # stripesize
    0 # stripeoffset
    INTEL SSDPE21K375GA # Disk descr.
    PHKE7510005K375AGN # Disk ident.
    Yes # TRIM/UNMAP support
    0 # Rotation rate in RPM


    I/O command overhead:
    time to read 10MB block 0.010853 sec = 0.001 msec/sector
    time to read 20480 sectors 0.432072 sec = 0.021 msec/sector
    calculated command overhead = 0.021 msec/sector

    Seek times:
    Full stroke: 250 iter in 0.026141 sec = 0.105 msec
    Half stroke: 250 iter in 0.011957 sec = 0.048 msec
    Quarter stroke: 500 iter in 0.018267 sec = 0.037 msec
    Short forward: 400 iter in 0.016424 sec = 0.041 msec
    Short backward: 400 iter in 0.018157 sec = 0.045 msec
    Seq outer: 2048 iter in 0.060869 sec = 0.030 msec
    Seq inner: 2048 iter in 0.046046 sec = 0.022 msec

    Transfer rates:
    outside: 102400 kbytes in 0.105981 sec = 966211 kbytes/sec
    middle: 102400 kbytes in 0.090489 sec = 1131629 kbytes/sec
    inside: 102400 kbytes in 0.111131 sec = 921435 kbytes/sec

    Asynchronous random reads:
    sectorsize: 1341848 ops in 3.000216 sec = 447250 IOPS
    4 kbytes: 1343147 ops in 3.000109 sec = 447699 IOPS
    32 kbytes: 178116 ops in 3.002087 sec = 59331 IOPS
    128 kbytes: 46179 ops in 3.008889 sec = 15348 IOPS

    Synchronous random writes:
    0.5 kbytes: 32.2 usec/IO = 15.2 Mbytes/s
    1 kbytes: 32.4 usec/IO = 30.1 Mbytes/s
    2 kbytes: 33.4 usec/IO = 58.4 Mbytes/s
    4 kbytes: 25.8 usec/IO = 151.4 Mbytes/s
    8 kbytes: 33.2 usec/IO = 235.3 Mbytes/s
    16 kbytes: 42.2 usec/IO = 370.4 Mbytes/s
    32 kbytes: 56.2 usec/IO = 556.2 Mbytes/s
    64 kbytes: 86.7 usec/IO = 720.8 Mbytes/s
    128 kbytes: 137.1 usec/IO = 911.6 Mbytes/s
    256 kbytes: 215.2 usec/IO = 1161.6 Mbytes/s
    512 kbytes: 360.2 usec/IO = 1388.0 Mbytes/s
    1024 kbytes: 667.9 usec/IO = 1497.3 Mbytes/s
    2048 kbytes: 1221.1 usec/IO = 1637.8 Mbytes/s
    4096 kbytes: 2388.8 usec/IO = 1674.5 Mbytes/s
    8192 kbytes: 4719.0 usec/IO = 1695.3 Mbytes/s

    diskinfo -citvwS /dev/pmem0

    512 # sectorsize
    17179865088 # mediasize in bytes (16G)
    33554424 # mediasize in sectors
    0 # stripesize
    0 # stripeoffset
    PMEM region 16GB # Disk descr.
    9548ADD1D6FC0231 # Disk ident.
    No # TRIM/UNMAP support
    0 # Rotation rate in RPM


    I/O command overhead:
    time to read 10MB block 0.002227 sec = 0.000 msec/sector
    time to read 20480 sectors 0.026084 sec = 0.001 msec/sector
    calculated command overhead = 0.001 msec/sector

    Seek times:
    Full stroke: 250 iter in 0.000439 sec = 0.002 msec
    Half stroke: 250 iter in 0.000425 sec = 0.002 msec
    Quarter stroke: 500 iter in 0.000830 sec = 0.002 msec
    Short forward: 400 iter in 0.000622 sec = 0.002 msec
    Short backward: 400 iter in 0.000692 sec = 0.002 msec
    Seq outer: 2048 iter in 0.002606 sec = 0.001 msec
    Seq inner: 2048 iter in 0.002542 sec = 0.001 msec

    Transfer rates:
    outside: 102400 kbytes in 0.014434 sec = 7094361 kbytes/sec
    middle: 102400 kbytes in 0.013545 sec = 7559985 kbytes/sec
    inside: 102400 kbytes in 0.013614 sec = 7521669 kbytes/sec

    Asynchronous random reads:
    sectorsize: 1867310 ops in 3.000057 sec = 622425 IOPS
    4 kbytes: 1589498 ops in 3.000047 sec = 529824 IOPS
    32 kbytes: 935622 ops in 3.000054 sec = 311868 IOPS
    128 kbytes: 328937 ops in 3.001158 sec = 109603 IOPS

    Synchronous random writes:
    0.5 kbytes: 1.6 usec/IO = 299.9 Mbytes/s
    1 kbytes: 1.7 usec/IO = 589.9 Mbytes/s
    2 kbytes: 1.7 usec/IO = 1143.4 Mbytes/s
    4 kbytes: 1.8 usec/IO = 2135.6 Mbytes/s
    8 kbytes: 2.4 usec/IO = 3244.6 Mbytes/s
    16 kbytes: 3.7 usec/IO = 4192.4 Mbytes/s
    32 kbytes: 9.3 usec/IO = 3344.5 Mbytes/s
    64 kbytes: 12.3 usec/IO = 5088.3 Mbytes/s
    128 kbytes: 17.6 usec/IO = 7119.2 Mbytes/s
    256 kbytes: 27.7 usec/IO = 9021.8 Mbytes/s
    512 kbytes: 46.6 usec/IO = 10731.7 Mbytes/s
    1024 kbytes: 84.4 usec/IO = 11853.0 Mbytes/s
    2048 kbytes: 159.5 usec/IO = 12535.5 Mbytes/s
    4096 kbytes: 314.3 usec/IO = 12726.1 Mbytes/s
    8192 kbytes: 621.4 usec/IO = 12873.4 Mbytes/s
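A quick sanity check on the arithmetic in these tables: the Mbytes/s column is just the block size divided by the per-IO latency, in 1024-based units. A small awk helper (the function name is mine, not diskinfo's) reproduces the numbers to within the rounding of the printed usec values:

```shell
# Recompute diskinfo's "Mbytes/s" column from block size (KiB) and
# latency (usec/IO). "Mbytes" here means 1024*1024 bytes, matching diskinfo.
# The helper name mibps is made up for this post.
mibps() {
    awk -v kb="$1" -v us="$2" \
        'BEGIN { printf "%.1f\n", (kb * 1024) / (us / 1e6) / (1024 * 1024) }'
}

mibps 4 25.8    # Optane 4k row  -> 151.4, matching the table
mibps 512 46.6  # PMEM 512k row  -> ~10729.6 (table says 10731.7; the small
                #   difference comes from the printed usec being rounded)
```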
     
    #1
  2. nephri

    nephri Active Member

    Joined:
    Sep 23, 2015
    Messages:
    494
    Likes Received:
    81
    very good throughput and latencies ^^
    Writing is faster than reading, is that normal?

    You're benchmarking a memory module, and it got me thinking about a benchmark I also ran today on the FreeBSD ARC throughput!

    What i did:

    creating an xxxx pool with
    • 4x SAS 12Gb/s SSDs in striped mode
    • compression disabled (because I mainly write only zeros to the files and didn't want to bench lz4 ^^)

    writing
    dd bs=128k if=/dev/zero of=/mnt/xxxx/test count=32000

    verifying there is no disk I/O
    zpool iostat xxxx 2

    reading from the arc
    dd bs=128k if=/mnt/xxxx/test of=/dev/null

    So, I can see from zpool iostat that the read command didn't perform any I/O against the pool's disks, which confirms everything was served from the ARC.
    But the dd result gives me at best a throughput of around 1.8 GB/s (far from the 7 GB/s you get from the PMEM).
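Spelled out end to end, the test above could look like this on FreeNAS (pool and disk names are placeholders; this is a sketch, not the exact session):

```shell
# Sketch of the ARC read test described above. The pool name "xxxx" and
# the da0..da3 device names are placeholders.
zpool create -m /mnt/xxxx xxxx da0 da1 da2 da3   # 4 SSDs striped, no redundancy
zfs set compression=off xxxx                     # don't benchmark lz4 on zeros

# Write a ~4 GB file of zeros (128k matches the default ZFS recordsize)
dd bs=128k if=/dev/zero of=/mnt/xxxx/test count=32000

# In a second terminal, watch for pool I/O during the read below
zpool iostat xxxx 2

# Read the file back; if iostat shows no disk reads, this is pure ARC speed
dd bs=128k if=/mnt/xxxx/test of=/dev/null
```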

    I find these values relatively low...

    So, with PMEM, you're trying to optimize write IOPS/throughput!
    Have you already benchmarked the ARC and/or L2ARC to maximize reads?

    PS: I avoid using /dev/random since it is very slow...

    Séb.
     
    #2
  3. nephri

    nephri Active Member

    I'm trying the same command (diskinfo) on a disk (not used inside a pool)

    Code:
     diskinfo -citvwS da73
    but I get an error when testing synchronous writes

    Code:
    Synchronous random writes:
             0.5 kbytes: diskinfo: Sync write error: Bad file descriptor
    
    Any advice?
     
    #3
  4. Rand__

    Rand__ Well-Known Member

    Weird that your ARC test came out so low - have you tried a real memdisk yet?

    And no idea what's wrong with diskinfo on your disk - the only issue I have is when disks were in a pool before; then I tend to get
    "diskinfo: /dev/nvd1: Operation not permitted" until I wipe them via the legacy GUI, but no actual faults when writing.


    I have not run too many tests, to be honest, since I spent a lot of time on firmware issues with these drives, which I finally resolved today (I will need to do a writeup of that ordeal :/).
    As you might know, my goal is low-QD writing, but despite very promising diskinfo values I am not there yet - I have not had the time to get a proper setup going.
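As an aside on those "Operation not permitted" leftovers: a CLI alternative to wiping via the legacy GUI (destructive, and assuming the disk really is out of any pool) would be something like:

```shell
# DESTRUCTIVE: clears old partition tables and ZFS labels from a disk that
# was previously in a pool. da73 is the device from the post above.
gpart destroy -F da73          # drop any partition scheme
zpool labelclear -f /dev/da73  # remove ZFS labels (kept at start AND end of disk)
```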
     
    #4
  5. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,931
    Likes Received:
    856
    What are you testing that on?
     
    #5
  6. Rand__

    Rand__ Well-Known Member

    Atm an X11SPH-nCTPF with a 6150QS.

    Or did you mean @nephri? :)
     
    #6
  7. MiniKnight

    MiniKnight Well-Known Member

    Oh @Rand__ I meant what device were you testing?
     
    #7
  8. Rand__

    Rand__ Well-Known Member

    Ah, I thought the white-on-white riddle was easy enough ;)
    It's a Micron DDR4 16GB 2667 NVDIMM module (pre-Optane). The top one is a P4800X.
     
    #8
    Last edited: Jul 14, 2019
  9. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,656
    Likes Received:
    397
    Do you have a part number or do you know if it's a nvdimm based on the jedec standards?
     
    #9
  10. Rand__

    Rand__ Well-Known Member

    I can provide it tonight.
    NVDIMMs are physically slightly different, as they have a connector for the PowerGEM (backup battery).

    Edit - here is a pic


    Micron MTA18ASF2G72PF1Z-2G6V2

    Just be aware that you do need the PowerGEM to make this non-volatile, and those are hard to come by/expensive and have compatibility issues.
    I got two from eBay which seemed to fit, but I have heard that they are not compatible (I have not tested them; I got that directly from my contact at the manufacturer).

    Edit2:

    Also, similar to regular memory modules, there are DDR3, DDR4-2133 & DDR4-2666 modules, and all might have different PowerGEM requirements. That is not entirely clear, and documentation is very sparse.

    Also, compatibility is ... difficult. Theoretically all Skylake and Supermicro dual-CPU 2011-3 boards support it, but as suggested above ('ordeal'), it was far from easy.
    I am not even talking about actually running them (however well that might be working atm), but just getting them updated to the latest firmware (which I could not have done without a very, very helpful gal at Micron), so these are definitely *NOT* plug & play.
    Maybe Optane DIMMs are better in that regard.
     

    Attached Files:

    #10
    Last edited: Jul 15, 2019
  11. Rand__

    Rand__ Well-Known Member

    Just an excerpt from a long-running series of tests to whet your appetite ;)

    --bs=4k --iodepth=32 --numjobs=16

    Code:
    -> Child 659: Will now execute
    fio  --direct=1 --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --runtime=600 --group_reporting --size=10G --name="p_sin_mir02_v01_o00_cno_sss/ds_128k_sync-always_compr-off-most"  --bs=4k --iodepth=32 --numjobs=16 --rw=randwrite --filename=/mnt/p_sin_mir02_v01_o00_cno_sss/ds_128k_sync-always_compr-off-most/fio_1.out --output /mnt/p_sin_mir02_v01_o00_cno_sss/ds_128k_sync-always_compr-off-most/fio_1_json.out --output-format=json 2>&1 |tee ./98606.log/fio_1.out
    2019-07-24-02:07:18,3>  Child 653 Running gstat now
    2019-07-24-02:07:18,3>  Child 652 Running iostat now
    2019-07-24-02:07:23,3>  Child 655 Running temp check now
    2019-07-24-02:07:23,2>will check drive temps
    2019-07-24-02:07:23,3>will check drive temps for disk /dev/nvd0
    executing command: smartctl -a /dev/nvd0 |grep -E "ID#|^194|^Current Drive Temperature:" failed with 256, , ^C
    root@freenas[/tmp]# zpool iostat 1
                                    capacity     operations    bandwidth
    pool                         alloc   free   read  write   read  write
    ---------------------------  -----  -----  -----  -----  -----  -----
    freenas-boot                 4.12G   145G     11     10   107K   184K
    p_sin_mir02_v01_o00_cno_sss  60.3G   384G  2.59K  19.2K   211M   830M
    ---------------------------  -----  -----  -----  -----  -----  -----
    freenas-boot                 4.12G   145G      0      0      0      0
    p_sin_mir02_v01_o00_cno_sss  60.3G   384G  6.21K  21.3K   795M   988M
    ---------------------------  -----  -----  -----  -----  -----  -----
    freenas-boot                 4.12G   145G      0      0      0      0
    p_sin_mir02_v01_o00_cno_sss  60.2G   384G  6.39K  22.2K   818M  1.01G
    ---------------------------  -----  -----  -----  -----  -----  -----
    freenas-boot                 4.12G   145G      3      0  16.0K      0
    p_sin_mir02_v01_o00_cno_sss  60.3G   384G  5.50K  20.3K   704M   939M
    ---------------------------  -----  -----  -----  -----  -----  -----
    freenas-boot                 4.12G   145G      0      0      0      0
    p_sin_mir02_v01_o00_cno_sss  60.3G   384G  6.36K  20.6K   813M   987M
    ---------------------------  -----  -----  -----  -----  -----  -----
    freenas-boot                 4.12G   145G      2      0  11.7K      0
    p_sin_mir02_v01_o00_cno_sss  60.2G   384G  5.44K  18.8K   696M   927M
    ---------------------------  -----  -----  -----  -----  -----  -----
    freenas-boot                 4.12G   145G      0      0      0      0
    p_sin_mir02_v01_o00_cno_sss  60.3G   384G  5.69K  19.8K   728M   893M
    ---------------------------  -----  -----  -----  -----  -----  -----
    freenas-boot                 4.12G   145G      2      0  20.0K      0
    p_sin_mir02_v01_o00_cno_sss  60.3G   384G  5.95K  19.3K   761M   978M
    ---------------------------  -----  -----  -----  -----  -----  -----
    
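To get an average out of a captured `zpool iostat` log like the one above, a small awk filter works (the helper name is mine; it assumes zpool's K/M/G suffixes are 1024-based, which is how zpool prints them):

```shell
# Average the write-bandwidth column (last field) over all samples for the
# pool in a saved `zpool iostat` log. The helper name avg_wbw is made up
# for this post.
avg_wbw() {
    awk '/p_sin_mir02_v01_o00_cno_sss/ {
        v = $NF; mult = 1
        if (v ~ /K$/)      mult = 1024
        else if (v ~ /M$/) mult = 1024 * 1024
        else if (v ~ /G$/) mult = 1024 * 1024 * 1024
        sub(/[KMG]$/, "", v)
        sum += v * mult; n++
    } END { if (n) printf "%.0f MiB/s over %d samples\n", sum / n / 1048576, n }'
}

# e.g.: zpool iostat p_sin_mir02_v01_o00_cno_sss 1 | tee iostat.log
#       avg_wbw < iostat.log
```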
     
    #11