
whitey's FreeNAS ZFS ZIL testing

Discussion in 'FreeBSD and FreeNAS' started by whitey, Sep 25, 2017.

  1. whitey

    whitey Moderator

    @T_Minus my working dataset IS sync=always because I am using NFS for my VMware datastores. I ran 3 VMs, each on a separate host of my 3-node vSphere cluster, totaling 180GB of data, so I KNOW I am WAY outside my memory bounds (ARC), with no L2ARC in the mix. I DID try FreeNAS iSCSI volumes with sync=always forced, and that did indeed push synchronous writes to the SLOG (you can see it live by toggling sync on and off during a Storage vMotion on that iSCSI zvol), whereas by default iSCSI goes async and you will see NO SLOG usage. That's what I really love about NFS: SLOG is nearly the perfect use case for VMware/virtualization on NFS.
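
    For anyone who wants to watch this live, a rough sketch of the toggle (the pool/dataset names here are made up, adjust to your layout):

        # force sync writes on the NFS dataset and on the iSCSI zvol (zvols go async by default)
        zfs set sync=always tank/vmware-nfs
        zfs set sync=always tank/vmware-zvol
        # watch per-vdev traffic; the log device should light up during a Storage vMotion
        zpool iostat -v tank 1
        # flip the zvol back and watch SLOG traffic stop mid-transfer
        zfs set sync=standard tank/vmware-zvol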

    What is odd, and I can't seem to wrap my damn head around, is this: two more HUSMM 200GB SAS3 drives showed up today as new SLOG devices. I added them as a striped SLOG to my raidz pool of 4x HUSMM 400GB devices and immediately saw writes evenly distributed across both SLOG devices, but overall throughput to the SLOG, whether 1 device or 2, was the same (about 200MB/sec). Creepy to see 1 device happily soaking up 200MB/sec, think you're gonna add another striped SLOG device and watch log performance double, and instead each device drops to 1/2 of what one device was doing... TOTALLY a wtf moment.
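
    If anyone wants to reproduce the striped-SLOG experiment, the commands look roughly like this (device names are hypothetical, they will be whatever FreeBSD enumerates on your box):

        # add two bare devices as a striped log vdev
        zpool add tank log da8 da9
        # watch the split; writes should distribute ~evenly across both log devices
        zpool iostat -v tank 1
        # log vdevs can be removed on the fly, handy for A/B testing 1 vs 2 devices
        zpool remove tank da9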

    EDIT: I gotta say, while the P3700 is the SLOG king IMHO among what I could get my grubby hands on, a 400GB P3700 is not cheap. With an $80-100 200GB HUSMM SAS3 device providing 200MB/sec of SLOG write throughput vs. 300MB/sec for a P3700, the cost difference is staggering for the bang for buck you get (or don't).

    My 2 cents. Here's hoping that Intel Optane DC P4800X test is out of this world. Has anyone else tested one of those yet as a SLOG with a similar setup/config/use case?
     
    #61
    Last edited: Sep 29, 2017
  2. T_Minus

    T_Minus Moderator

    @whitey I'm not saying your testing is non-sync. I'm saying we need to make sure others' testing is done with sync=always and that results come from steady state, not simply a minute or two of transfer.

    We all know even consumer drives are fast for a few minutes, then they can't handle it and drop off.

    What I'm saying is that a 100GB transfer with sync=always is extremely unlikely to put anything but the cheapest drive into steady state.

    Just as review sites and companies (Intel specifically comes to mind) benchmark SSD performance, the same must be done (especially) for a SLOG... testing must happen after the drive has reached steady state.

    Intel SSD DC S3500 Review (480GB): Part 1

    S3500 480GB - look at the numbers at 400s, then at 1000s, and at 1400s where they level out.

    Just because a SLOG device can handle 1 minute of sustained usage doesn't mean it's the best for constant usage, is what I'm trying to say in way too many words ;) lol.


    Would love to see a SLOG test that records data points and goes out to 1500 or 2000 seconds of transfer.
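
    Something like this fio run would do it, purely as a sketch (paths and sizes are placeholders; run it against a dataset with sync=always):

        # time-based sync-write test, logging average bandwidth once per second out to 2000s
        fio --name=slog-steady --directory=/mnt/tank/bench \
            --rw=write --bs=128k --size=100g --sync=1 \
            --time_based --runtime=2000 \
            --write_bw_log=slog-steady --log_avg_msec=1000

    That drops a slog-steady_bw*.log file you can graph to see exactly where the drive falls off.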

    Thoughts?

    Or how many GBs (or TBs) have you done?
     
    #62
  3. marcoi

    marcoi Active Member

    Any updates on Intel Optane DC P4800X testing?
    Looks like the Optane 900P is rolling out shortly. I'm wondering if that might be a good SLOG as well?
     
    #63
  4. Patrick

    Patrick Administrator
    Staff Member

    Yeah, I put a bad SSD in there yesterday that is throwing fits.
     
    #64
  5. whitey

    whitey Moderator

    Waiting patiently in the wings for access :-D
     
    #65
  6. marcoi

    marcoi Active Member

    Any updates?
     
    #66
  7. Patrick

    Patrick Administrator
    Staff Member

    @BackupProphet was fixing the box yesterday under FreeBSD 11.1.

    The machine, I believe, has an S3610, S3700, P3700, Optane P4800X, and a Toshiba PX02 SAS3 SSD in it. I also installed 3x 15K RPM SAS3 hard drives for a storage pool. The next step is to switch it to CentOS. I may add a few more SAS3 SSDs in the meantime.
     
    #67
  8. whitey

    whitey Moderator

    Yeah, we can put a fork in this one for this thread. I think I am satisfied, having seen recent results for my use case with what I have on hand. Interested to see the results of the ZLOG testing thread.
     
    #68
  9. marcoi

    marcoi Active Member

    @Patrick - Do you plan to do a write-up article on the testing or just post on the forums? It would be awesome to have an article. If you decide on an article, would it be possible to see:
    1. Benchmarking each SLOG against a data pool (with sync=always on the pool); any benchmark tool would work.
    2. Real-world experiments such as vMotion over both iSCSI and NFS connections via 10GbE or faster.
    3. Cost per performance of each drive.
    4. Two pools, one of spinners and the other SSD, with one SLOG device partitioned in two and shared between the pools, then testing both pools at the same time. This is more of a way to justify the high cost of a SLOG drive (at least in my mind): if it can be partitioned into smaller sizes, e.g. 50-100GB, each partition could serve as the SLOG for a different pool. I'm not sure how, or if, this will work, but it would be cool to test whether it's possible and what the results might be; a rough sketch is below.
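
    For #4, my guess at the FreeBSD side of it, purely as a sketch (device and pool names are made up):

        # carve one fast device into two small SLOG partitions
        gpart create -s gpt nvd0
        gpart add -t freebsd-zfs -a 4k -s 50G -l slog-spin nvd0
        gpart add -t freebsd-zfs -a 4k -s 50G -l slog-ssd nvd0
        # attach one partition to each pool as its log vdev
        zpool add spinners log gpt/slog-spin
        zpool add ssdpool log gpt/slog-ssd

    The open question is exactly what I raise above: both pools' sync writes would be contending for a single device's write path.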

    Thanks,
    marco
     
    #69
  10. Patrick

    Patrick Administrator
    Staff Member

    Marco, we will likely do some of that. Only so many resources to put on this stuff.
     
    #70