SSDs, ZIL, and L2ARC

Discussion in 'FreeBSD and FreeNAS' started by frogtech, Mar 31, 2018.

  1. frogtech

    frogtech Well-Known Member

    Joined:
    Jan 4, 2016
    Messages:
    1,018
    Likes Received:
    98
    Setup summary:
    • E3-1230 V5, 32GB DDR4-2133 EUDIMM
    • x4 8TB Seagate Archive 5900 RPM
    • x2 SanDisk Extreme Pro 480GB
So I installed FreeNAS on this setup, and I went into it thinking I would stripe the two SSDs and use them as a cache for ZFS. However, my very limited googling has taught me that a striped cache volume isn't supported, and that using more than one device results in the caching being round-robined across them. Is this accurate? Could I stripe these two together and use them as one cache volume?

Next question is regarding the ZIL: is it necessary? What are the benefits? Would it be better for me to put the Seagate drives in a striped mirror and use one SSD for the ZIL and one SSD for L2ARC?

I'm aware the Seagate drives aren't really top performers; at some point I'd like to swap them out for HGST NAS drives or a 7200 RPM version, but really I'm just re-using these for bulk storage at this point, and re-using the SSDs since I put an NVMe drive in my main rig. I might just sell the SanDisks and get some Intel DC S-series drives or something with more endurance, but for this post, let's assume I'm not swapping any parts out.
     
    #1
  2. BlueLineSwinger

    BlueLineSwinger Active Member

    Joined:
    Mar 11, 2013
    Messages:
    117
    Likes Received:
    43
    You'll see no benefit from either ZIL or L2ARC for a typical home setup.

L2ARC helps some with reads by caching commonly accessed data. It supplements the RAM-based ARC, which in your case will probably be substantial already, given the large amount of RAM installed for a media/SOHO NAS.

    ZIL can help with writes. Again, it supplements RAM. So unless you expect to be hitting the box with a ton of writes at high speed a ZIL SSD won't help any.

    If you have no use for those SSDs in your desktop system, sell them off or use them in builds for friends/family.
     
    #2
  3. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,722
    Likes Received:
    764
For a home setup you also need to tune parameters like how fast the L2ARC fills
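On FreeBSD/FreeNAS those knobs live in loader.conf. A sketch only; the values below are illustrative, not recommendations, and names and defaults vary by ZFS version:

```shell
# /boot/loader.conf -- L2ARC fill-rate tunables (illustrative values only)
vfs.zfs.l2arc_write_max="67108864"     # max bytes fed to L2ARC per fill interval (64M)
vfs.zfs.l2arc_write_boost="134217728"  # faster fill while the ARC is still cold
vfs.zfs.l2arc_noprefetch="0"           # 0 = also cache prefetched (sequential) reads
```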
     
    #3
  4. Linda Kateley

    Linda Kateley New Member

    Joined:
    Apr 25, 2017
    Messages:
    21
    Likes Received:
    5
The way to tell if you will benefit from a ZIL device is to run zilstat, which shows you ongoing ZIL activity. If you have any activity at all, you will benefit from a dedicated device. FreeNAS has zilstat built in. Run it with an interval in seconds, like:

    #zilstat 1
     
    #4
    MikeWebb likes this.
  5. dragonme

    dragonme Member

    Joined:
    Apr 12, 2016
    Messages:
    235
    Likes Received:
    24
    @BlueLineSwinger

you are so wrong on ZFS it's not even funny...

1 'You'll see no benefit from either ZIL or L2ARC for a typical home setup.'
wrong... I have a home ESXi lab, and with VMs running on ZFS-backed storage a SLOG is a tremendous help on sync writes.. even though my storage is SSD based. L2ARC similarly helps with reads, so I don't have to consume all the host RAM just for the ZFS host VM..

    2 " ZIL can help with writes. Again, it supplements RAM. So unless you expect to be hitting the box with a ton of writes at high speed a ZIL SSD won't help any"

wrong.. ZIL writes don't go to RAM.. all ZIL writes go to DISK. Regular ZFS writes (i.e. not sync writes) go to RAM and are then flushed; ZFS waits so it can do a larger write in optimal places on the platter.

Without a dedicated SLOG (ZIL device), all sync writes are committed at least twice (more depending on the type of pool). Each sync write goes to disk immediately, dirty, wherever the head happens to be, so the write can be acknowledged as safe.. that is the log write. The data is then re-written in a larger block later. This will make swiss cheese out of platter-based storage in small-write environments like databases, VMs, or worst of all.. torrents.

With a SLOG device (which should be a battery/cap-backed SSD), all sync writes under a certain size are written to the SLOG AND memory, acknowledged, and later flushed from RAM. The SLOG is never touched on regular writes; it is only READ from IF there was an outage. All writes to the platters still come from RAM; the SLOG is a LOG DEVICE and only replays those writes if the pool comes back up and it's not clean.

so as you can see.. there are plenty of reasons to run a SLOG.. even at home... but moreover.. you should really read up on this more if you use ZFS, so you know what is going on under the hood and don't corrupt your data or waste money on gear that is not being used correctly
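If you do want to experiment, a log vdev is cheap to try and to undo.. something like this (pool and device names are placeholders for illustration, not a recommendation):

```shell
# Placeholder names; a real SLOG should be a power-loss-protected SSD.
zpool add tank log gpt/slog0      # attach a dedicated log device
zpool status tank                 # it shows up under a separate "logs" section
zpool remove tank gpt/slog0      # log vdevs can be detached again if they don't help
```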
     
    #5
  6. BlueLineSwinger

    BlueLineSwinger Active Member

    Joined:
    Mar 11, 2013
    Messages:
    117
    Likes Received:
    43

    This is hardly a typical home usage scenario. Despite how this particular forum might portray things, your usage is quite on the fringe, and there's no indication that the OP was intending to use the NAS for anything like hosting VM images, DBs, etc.

And you should be favoring RAM-based ARC over L2ARC, as it's so much faster. So what if it takes up system RAM? The ARC only uses what's not reserved for other processes, and will dynamically shrink if needed as other processes expand.



You're right, I didn't properly differentiate between sync and async writes. I was thinking of async writes, which are used for most standard file-sharing activity and do go through typical system RAM caching ("...the ZIL does not handle asynchronous writes by default. Those simply go through system memory like they would on any standard caching system."). For sync writes a SLOG can certainly help in select circumstances (such as your VM hosting). However, again, those are on the edge for a typical home setup, and my impression was that the OP was not doing anything outside of that.
     
    #6
  7. dragonme

    dragonme Member

    Joined:
    Apr 12, 2016
    Messages:
    235
    Likes Received:
    24
    @BlueLineSwinger

look.. I don't care if you want to be presumptive and arrogant and tell people what they do or don't need, or what their particular usage needs are.. that is just bad manners, and apparently people here are allowed to be rude..

what I do take issue with is that you factually don't know how ZFS works and are presenting straight falsehoods as fact.. and with ZFS that's all too common..

again.. you presume to tell me RAM vs. L2ARC bla bla.. but you don't know the operational constraints of my setup, and resources are finite.. especially RAM.. so using L2ARC if and where I choose is my business... not yours... Is RAM faster? Yep.. but that is why Sun invented L2ARC.. not all system RAM can be dedicated to ZFS, and sometimes even that is not enough, so L2ARC caching on fast media is the next best thing..

and your misunderstanding of the ZIL and SLOG goes way deeper than sync vs. async writes.. you presented the ZIL structure completely wrong

both sync and async writes go to RAM, as you say.. the only difference is that a sync write ALSO has to be written to the ZIL.. and that is either on the main pool... or on a separate log device (SLOG).. and each write has to wait for an acknowledgment that it was written in both places before another can be issued.. and if you have no SLOG device.. the ZIL makes swiss cheese of the pool and fragmentation numbers go through the roof

and ZFS backing a home environment for torrenting IS mainstream.. and it's not a very good choice, as any copy-on-write system sucks for that application, but people still use it.

so keep your impressions, gut feelings, and other zen out of it; talk facts, and let the user make an educated decision based on fact and not flat-out falsehoods
     
    #7
  8. niekbergboer

    niekbergboer Member

    Joined:
    Jun 21, 2016
    Messages:
    85
    Likes Received:
    30
What you're looking for in a SLOG is low QD1 small-block write latency. I run a home server with a number of VMs, as well as root on ZFS, and I use an Intel Optane 900P as the SLOG.
     
    #8
  9. ttabbal

    ttabbal Active Member

    Joined:
    Mar 10, 2016
    Messages:
    636
    Likes Received:
    175
Is the application just basic file serving on a 1Gb/s network? If so, you probably don't need a SLOG or L2ARC. If the server is not running any containers/VMs, 32GB is a fair bit of RAM for caching and should handle things well. How are you sharing files with other machines? If NFS is involved, sync writes may be an issue. As previously mentioned, zilstat is your friend here.

It sounds like your SSDs are consumer drives. If they don't have power-loss protection, or lie about write status, they are not good candidates for a SLOG. That covers about 90% of consumer drives. Consumer SSDs are great for client machines, but they have real issues in servers beyond the obvious ones like write endurance. They might work fine for L2ARC, but unless the server is really busy you may well not notice any difference with one. If you decide to try it, make sure you tune it for your workload; the defaults can be a bad fit for home users. Those slow spinners might make it more useful, though.
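To see whether sync writes are even happening, something like this on the FreeNAS box (dataset name is a placeholder):

```shell
# Placeholder dataset name.
zfs get sync tank/share    # "standard" = honor sync requests (NFS clients issue them)
zilstat 1                  # nonzero ops per interval = the ZIL is actually in play
```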
     
    #9
  10. dragonme

    dragonme Member

    Joined:
    Apr 12, 2016
    Messages:
    235
    Likes Received:
    24
    @ttabbal provides solid advice..
    @frogtech

I have been running ZFS since about 2008 and I have NEVER needed log devices.. even with my ESXi home-lab all-in-one, let alone a file server.

I ran/run a media server with ZFS on OS X since 2008, with an 8-disk pool serving over 16TB of data to multiple clients.. it did fine with 8GB of RAM (not ECC) and no log devices. That 8GB also had to run OS X and the apps on top, like the media server, OS X Server, and a security camera program.

my current (primary) server at home is a dual-CPU L5640 12-core, 48GB (ECC) machine. It serves 2 online pools (1 large media pool on spinners, and a VM pool for ESXi on a 2-SSD stripe). It is obviously ESXi based, with a ZFS VM (OmniOS - the best ZFS after Solaris) providing storage for 6 VMs (media, security cameras, vCenter, etc.) over internal NFS, while media is served to the media VM via internal SMB. Multiple clients hit the media, and the security cameras (6) write 24/7, with no issues WITHOUT log devices. The ZFS VM is only allocated 6GB.. that's right, 6.. and it does fine.

It also hosts, via a 4E port on the LSI card, my offline backup shelf, which is a 3x raidz1 stripe of 15 drives... With everything going.. VMs active.. that pool scrubs at 950MB/s, saturating the link.. way fast enough for me.. and it writes at over 650MB/s if I remember right. Incremental backups take seconds.

I could improve my VM pool slightly by adding a SLOG device, but it would only improve fragmentation, since the VMs have not been write-constrained, and fragmentation will only cost me a storage-space penalty, not speed.. I don't care.. when it frags too badly.. I will secure-wipe the SSDs, destroy/create the pool again, and copy the data back from BACKUP.. done..


here is the thing with ZFS, and nobody around here talks about it the right way... ZFS is OVERLY flexible. It's a large bag of tools and an even bigger bag of tuning knobs and switches.. and there is NO BEST WAY to set it up.. and you really don't need a GUI.. I managed my ZFS 8 years ago with like 8 ZFS commands.. and you will learn what you are doing and WHY.

people are always asking, and most reply.. oh, you can only set it up as striped mirrors.. bla bla.. With ZFS you have to pick the right tools for your workload, COST, hardware, and RISK tolerance

for example..
people complain that you can't expand raidz. Well, yes you can.. with another raidz vdev... They complain that that is too expensive.. well then, don't use it. raidz is NOT backup.. it's not even insurance... it was designed ONLY TO KEEP THE DATA ALIVE long enough for admins to switch over to a BACKUP.. i.e. an essential database or website that you don't want to go down if a device drops.. the pool will still serve data.

raidz is a great structure if you just want some bitrot protection.. say on a backup pool that spends its life offline.. that way, if there is some corruption.. it can be repaired. But if my backup pool faults an entire drive.. I don't do a rebuild from redundancy.. I destroy the pool and replace all the drives if they are really old, since if one failed more are likely.. once I square away the hardware.. I build a new pool and just back up to it again from scratch. But then, I have 2 backups..

a cost-effective way at home is to just run (online) a basic stripe.. that's right.. no redundancy.. it's the fastest structure and has ZERO overhead.... Add disks when you need them, 1 or 2 at a time... I do 2, so that data is read and written to at least 2 disks.. plenty fast for what I need (over 300MB/s write, and that is when all but 2 drives are 90+% full), and reads saturate the controller. Cost-effective, and since I run an offline backup pool that is a raidz.. I have plenty of redundancy for failure, bitrot, ransomware, etc.

When I get to 5 drives and either run out of storage or, say, they are 4 years old.. I will buy, say, 2 new drives that now have the capacity to replace those 4-5.. copy the data over.. then add the old 5 drives as ANOTHER raidz1 vdev to my backup shelf.. voila... This works for ME.. I have ZERO online cost overhead for my main pools, and my BACKUP pools are FREE. It may not be what you want... but that is what makes ZFS a best-in-class storage system.. smart storage engineers have great flexibility...

biggest takeaway.. mirrors/raidz are NOT BACKUP... they are data redundancy to allow UPTIME.. the structure used is more a performance issue.. not a measure of how SECURE your non-existent backup is... You MUST have a backup pool... period.. that is probably the only hard and fast rule with ZFS.. like any other data storage system, if you run without a backup, you will eventually lose data.. probably from an ill-entered command, not a hardware failure

second biggest takeaway.. if you are not reading the Sun/Solaris source manuals on how to run and admin ZFS.. you are going to lose data.. Coming here and getting half-baked internet-commando advice and trusting your data to it, without understanding what you are doing, is going to cost you your DATA.. that is a guarantee... ZFS is powerful.. and CAN be complex.. and with great power comes great opportunity to screw it up.. Keep it simple.. make it more complicated IF YOU NEED TO.

    99% of the people here.. mostly the freenas crowd.. are using ZFS wrong.. way overkill in the hardware department.. and using pools in a less efficient way.

again.. I can run an entire media server (my backup is still that 8-year-old machine), an E8400 with 8GB of RAM serving (now) as a 2nd backup of the data and a primary backup for the hardware.. with an 8-drive raidz1, compression on, and it can write at network line speed and serve at line speed to multiple clients.. while having enough CPU left for compression, checksums, AND transcoding media on the fly.. and that is 8-year-old hardware. It scrubs at over 500MB/s..

with ZFS at home.. start off slow.. look at your REQUIREMENTS.. and plan your hardware and pools efficiently, and ZFS won't cost a fortune and you won't lose data.. Unless you have money to burn and want geek bragging rights that your $16,000 home storage server runs all the latest and greatest Optane bla bla.. personally, someone who brags about spending $16,000 to do the work of $1,000 is a fool.. especially if they brag about it....

    I have not lost a single file in 8 YEARS. that is 8 YEARS... 70,000 HOURS UPTIME.
     
    #10
    Last edited: Apr 12, 2018
    frogtech likes this.
  11. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,361
    Likes Received:
    1,296
Wow, I haven't seen such an opinionated, self-righteous post on STH, ever. Not to mention the irony in your contradictions, even against yourself.
     
    #11
    CreoleLakerFan likes this.
  12. dragonme

    dragonme Member

    Joined:
    Apr 12, 2016
    Messages:
    235
    Likes Received:
    24
    @T_Minus


Opinionated.. yep.. and as I stated ALL OVER it.. it was just one person's opinion on what has worked for ME.. nowhere do I cram that down anyone's throat or say you have to do it that way.. so self-righteous.. no, that is not accurate

    contradictions .. no .. options..

like I have said throughout the post.. ZFS is overly flexible.. there is no 'one right way'.. everything is a tradeoff.. and that is the point I was trying to make ad nauseam

Now, if I have presented something that is FACTUALLY incorrect.. feel free to educate me and the rest of us by pointing out specifically where I am wrong, and correcting it with FACTS and sources.. I never claim to know everything, and I learn something new every day..
     
    #12
    Last edited: Apr 12, 2018
  13. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    669
    Likes Received:
    233
    ...apart from civility it seems.

    You'll attract more flies with honey than you will with ranting at them for being insufficiently well informed flies that shouldn't bother to have an opinion.

    There's a technical discussion to be had here on the relative merits of ARC vs. L2ARC (and to ZIL or not to ZIL) but from what I can understand from your ranting, meandering posts, you're not having this sort of discussion at all and are merely rattling off a list of your own hardware and use case.

OP asked about implementing a ZIL on SSD. BlueLineSwinger didn't answer that question directly (so arguably going off-topic) but did point out that, under most home usage scenarios one might use a FreeNAS box with four HDDs for, it might be of little appreciable benefit (which is correct if the use case is just a simple media server). You then launched into a diatribe claiming that a ZIL is essential for your use case and that BlueLineSwinger is somehow a dunce for not understanding this.

BlueLineSwinger then points out that the OP didn't specify a use case, and agrees that he didn't sufficiently explain his prior post and the finer points of sync vs. async writes, and for no readily apparent reason you start calling them rude. And the further down the rabbit hole you go, you seem to claim that 99% of people don't know how to use ZFS properly and that you, presumably, are the One True Oracle Of ZFS Facts. And yet you then agree with ttabbal when he says that a ZIL is probably overkill and - assuming no heavy random IO workload - probably surplus to requirements (the exact point that you disagreed with BlueLineSwinger making, unless I'm mistaken), and that implementing a ZIL on a consumer SSD might not be a great idea.

A biblical treatise on your ZFS expertise then follows, at the end of which you say that you can run a media server (the use case BlueLineSwinger was assuming) on a limited set of hardware without a ZIL (again, exactly what BlueLineSwinger was saying in the first place). And when T_Minus, entirely appropriately IMHO, points out this jumbled mess of posts as self-aggrandising nonsense, you're quite happy to tell him that no, he's wrong as well.

    Exactly what point you're ever trying to argue is entirely unclear. If you've got some relevant info for the OP then please feel free to post it; until then you're just cluttering up the thread with drivel (although I too am now also guilty of that thanks to you but you've annoyed me with your persistent impoliteness).

    Disclaimer: not a ZFS expert or even a freenas user so I'm quite likely ignorant on some of ZFSs finer technical points. But I come to these forums to read about and discuss these points, not to suffer sanctimonious diatribes.
     
    #13
  14. dragonme

    dragonme Member

    Joined:
    Apr 12, 2016
    Messages:
    235
    Likes Received:
    24
    @EffrafaxOfWug

    points you make..

yes, the OP asked about a ZIL (he really means a SLOG, since the ZIL is ALWAYS THERE).. that goes for the rest of you who call it a ZIL, too

my issue with @BlueLineSwinger isn't so much that he immediately tells him he doesn't need it (as if he really knows), but that his entire description of the ZIL, and more accurately the SLOG, is flat-out TECHNICALLY INCORRECT.

so my post to him was two-fold: straighten out the fact that a SLOG can, even at home, have a place. The benefit may not be performance but fragmentation; to summarily rule it out is WRONG. However, as I have shown in my own use, and as other posters have agreed, it is likely not needed most of the time.. even in the enterprise, it only applies in a small set of circumstances: frequent, SMALL writes that are sync (or are going to be forced sync), where performance is constrained by the application issuing the sync write.

My longer post was merely to point out that ZFS is very flexible, with many options and settings both overt and under the hood... and as your own post continues to show... most people are ill-informed and further spouting info that is not fact.

The ZIL has nothing to do with random read/write... nor did I say it did, nor did I anywhere tell anyone that it is essential.. so you have reading comprehension issues. My ENTIRE long post is about how I don't use it at all on my home machines that are in production.. and how I could use it if I wanted, but would see little to no performance improvement

this is @BlueLineSwinger's comment on the ZIL.. which has absolutely nothing correct:
    " ZIL can help with writes. Again, it supplements RAM. So unless you expect to be hitting the box with a ton of writes at high speed a ZIL SSD won't help any."

my issue wasn't with him saying it wouldn't help (although he can't really know the OP's requirements), but that he is flat talking out his ass

1> The ZIL can't "help with writes"... the ZIL is in the pool whether you want it or not..
2> "it supplements RAM"... more BS... ALL writes.. sync and async.. go to RAM.. only sync writes ALSO have to be written to the ZIL.. If your pool is on spinners, each sync write results in 1 write to RAM plus 1 ZIL write to DISK that must be acknowledged as written (more if you have a pool with redundancy); then the sync write is flushed from RAM to the disk during a transaction-group flush, at which point the on-disk ZIL is freed.. lots of additional read/write hammering on the drive for every single small sync write... lots of overhead. SO AS YOU SEE, it has nothing to do with "high speed writes" as he stated.
3> What everybody MEANS to say is: add a SLOG device so that the ZIL writes can be acknowledged faster, without having to be committed to the pool disks multiple times. A small sync write still goes to RAM, but the ZIL log write goes to the SLOG device.. which should be an SSD or other fast, power-loss-protected, high-write-endurance device.. It is NEVER read from.. unless the system crashes; then its log is used to replay the uncommitted writes.. It does not supplant or assist RAM at all.. ever.. it's not a write buffer.. ever... and unless your SLOG is sufficiently faster than the pool.. it CAN actually slow down your sync writes.

and based on the threads and conversations on this and other ZFS topics.. yes.. it very much seems that 99% don't comprehend ZFS or how to use it efficiently.. again.. my opinion.. so thank you.. I think I will stand by it...

I also never claim to be the sage source, and I very much caution the OP and anyone else running ZFS not to admin it by collective google search (that includes me), but to read the Sun/Solaris ZFS source documents.. I stand by that advice as well.. thanks, Dad


now the only thing you say here that makes sense is this.. you have no clue what we are talking about:

"Disclaimer: not a ZFS expert or even a freenas user so I'm quite likely ignorant on some of ZFSs finer technical points. But I come to these forums to read about and discuss these points, not to suffer sanctimonious diatribes."

    SO THIS IS MY WHOLE POINT

you come here to LEARN... and if you were to take what @BlueLineSwinger has said at face value.. you would learn the wrong things...

and as far as ZFS goes.. there is a lot more wrong info in this forum than there is correct info..

    I AM DONE...
     
    #14
    frogtech likes this.
  15. frogtech

    frogtech Well-Known Member

    Joined:
    Jan 4, 2016
    Messages:
    1,018
    Likes Received:
    98
Hey dragonme, thanks for the input. I enjoyed reading your posts. The specs of my build probably seem like overkill, but the mobo was a steal and I wanted modern features (as many onboard ports as I needed to fill a Supermicro 721TQ NAS chassis), the capability to have some sort of embedded storage device (USB 3.0 port on board), IPMI, and as much CPU performance as I could get in case I wanted to run a VM or two on top of the storage.

I also got 32 GB of RAM because I only have 2 slots on the mobo and didn't want to have to upgrade too soon. And the CPU was relatively inexpensive. No doubt it was overpriced compared to, say, an E8400, but ECC-supporting ITX boards for Penryn/Wolfdale are few and far between, I think. Maybe I'm wrong.

I should probably expand on my OP. Most of you are right, it's a simple media server, but I would like to run a few VMs on it at some point; nothing too intense, just the lighter things that don't necessarily belong on more traditional infrastructure. But yeah, for now, just storage.
     
    #15
  16. dragonme

    dragonme Member

    Joined:
    Apr 12, 2016
    Messages:
    235
    Likes Received:
    24
    @frogtech

Hey, don’t get me wrong.. I was not telling you to buy or ditch hardware.. all I was trying to convey was that it’s possible to run ZFS with very few resources.. depending on your use case and requirements....

Not EVERY ZFS box needs a SLOG, SSDs, Optane, etc... yet people around here will say with certainty that you need them without ever having seen a single log or performance chart to check whether you actually do.

It’s easy to over-spec and over-engineer a solution.. a money-is-no-object mentality has no place in business, and less at home.. being a storage engineer is about right-sizing a solution based on requirements, keeping costs in check, with reliability and usability as the benchmarks....

Good luck with your build... I am sure you will be fine... FreeNAS, especially the latest version, has a lot more overhead than other base OS choices like Solaris or OmniOS, but the GUI is more user-friendly, at the expense of resources and of the user learning the commands and the ‘why’ behind what is being done when you click the buttons... its usage of RAM and its need to stripe the system cache across all drives is absolutely not smart and can be defeated, but that too has downsides...
     
    #16
  17. weust

    weust Member

    Joined:
    Aug 15, 2014
    Messages:
    225
    Likes Received:
    18
I know this topic hasn't been posted in for a while, but I have a use case that might fit in here.

    I want to build a home ZFS server which will serve the following:
    1. Remuxed Blu-rays in MKV format for use with Kodi (LibreELEC on a RPi) (my own bought Blu-rays, btw)
    2. AIFF audio files for use with a streamer or PC use with a music player (all personally bought CDs or download)
    3. Some data files. Nothing worth mentioning

My plan is to make one pool of eight HDDs, create three datasets, and share those datasets via NFS (for the MKVs) and SMB (for the AIFF and data files).
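For context, that plan would look roughly like this (disk and pool names are placeholders; on FreeNAS the GUI would normally create the pool and shares for you, and the sharenfs/sharesmb properties behave differently per platform, so treat this purely as a sketch):

```shell
# Placeholder device and pool names; a sketch, not a tested recipe.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zfs create -o sharenfs=on tank/movies   # remuxed MKVs over NFS
zfs create tank/music                   # AIFF, shared over SMB via the NAS GUI
zfs create tank/data
```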

The idea behind using log and cache SSDs (and this is the question, because my train of thought might be totally wrong) is that when I watch an MKV or listen to an AIFF file, the cache drive would be used for each file.
Writing would go to the log drive, which would be nice when copying MKV files or several CD rips and downloaded high-res audio.

What I don't know, or understand, is when the cache and log drives will actually be used.
Does this depend on a minimum or maximum file size, or something else entirely?

My thought is that reading/streaming a ~30GB MKV file from the cache SSD would be nicer than having eight HDDs at full power usage.
Also, a mirror for the log drive makes sense, but a mirror for the cache drive (in my opinion) not so much, in my case.

    I hope anyone can shed some light on this.
     
    #17
  18. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    669
    Likes Received:
    233
Someone better versed in ZFS than me will hopefully correct me, but that sort of workload (i.e., primarily sequential reads of large linear files) won't see a great deal of benefit from an L2ARC (read cache), and even less benefit from a ZIL/SLOG (which is primarily there to improve random IO and especially sync writes, which it doesn't look like you're using or need to use).

Reads will still come from the discs - there won't be any read-the-whole-file-into-SSD-and-play-from-there, at least not out of the box. I could be wrong, but I seem to remember the L2ARC is primarily intended to speed up random reads, and thus prefers to cache "hot blocks" to help with random IO. Files that are usually accessed sequentially can come off the discs fast enough without causing thrashing, so they'd be low contenders for populating the L2ARC.

    From a relatively old post here:
So it's likely that L2ARC usage can be tweaked to better speed up sequential workloads, but I think it'd be up to your testing to see whether that's necessary.

Eight discs in a RAIDZ with a sequential workload should be capable of saturating a 1Gb/s network interface without a caching layer. Personally, I'd say build up your server/array, test your proposed workload, and see if performance meets expectations - you can always add an L2ARC or SLOG later if they turn out to be needed.
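A couple of commands make that testing concrete (pool name is a placeholder; both tools exist on FreeBSD/FreeNAS):

```shell
# Watch per-vdev traffic while you stream a file; if the HDDs are coasting,
# a cache device has nothing to add.
zpool iostat -v tank 5

# ARC effectiveness: a high hit/miss ratio means an L2ARC would see little use.
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
```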
     
    #18
  19. weust

    weust Member

    Joined:
    Aug 15, 2014
    Messages:
    225
    Likes Received:
    18
I should have mentioned that I understand a ZIL/SLOG will help with small files, though I didn't know (or forgot) that it was block-based and not file-based.

    Thanks for the reply, it helps making sense of things for me.

Currently I can saturate the 1Gbit/s NIC on my Synology 4-bay NAS, so eight drives should easily be able to do it as well.
    The 8 drives would be in a RAIDZ2 setup.

I have tested a simpler setup with 5 drives in RAIDZ2, using a 20GB log and a 20GB cache drive, and noticed that performance plummeted once the 20GB log disk was full. So I know I would need a lot more when copying several MKV files at once :)

    When I've set things up again, I will do testing.
     
    #19
  20. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,361
    Likes Received:
    1,296
What you're describing really isn't making sense. Unless you're also running VMs or databases, there should be no improvement for "small files" from the addition of a SLOG device, and you'd also need to have sync enabled or set to always.

- A SLOG device is not read from during normal operation, only after a problem / bad power-off shutdown.
- Unless you set sync=always you won't even hit a SLOG with movies/streaming, and you don't need it for this purpose anyway.
    - L2ARC can be configured to cache sequential reads, make sure you have enough RAM if you're going to cache a lot of data.
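On the RAM point: every block cached in L2ARC costs a header in ARC RAM, so an oversized cache device eats into the ARC it is supposed to supplement. Back-of-the-envelope, assuming roughly 70 bytes per header (the exact figure varies by ZFS version) and the default 128K recordsize:

```shell
# Rough L2ARC header overhead; the 70 bytes/record figure is an assumption.
L2ARC_BYTES=$((480 * 1024 * 1024 * 1024))   # e.g. a 480GB cache SSD
RECORDSIZE=$((128 * 1024))                  # default 128K recordsize
HEADERS=$((L2ARC_BYTES / RECORDSIZE))       # records the device can hold
OVERHEAD_MB=$((HEADERS * 70 / 1024 / 1024))
echo "~${OVERHEAD_MB} MB of ARC RAM spent on L2ARC headers"
```

With small recordsizes (VMs, databases) the header count, and therefore the RAM cost, goes up sharply.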
     
    #20