Writes to ZIL/SLOG and L2ARC


Patrick

Administrator
Staff member
Dec 21, 2010
Has anyone seen data on average ZIL/SLOG writes and L2ARC writes in NAS scenarios? Probably two categories: 1. Workgroup NAS, 2. VM NAS.

I know there are a ton of variables, but I am wondering if there is a good rule of thumb, e.g. a ratio to total data written or some other environmental factor.

It would be nice to have data to say these SSDs are good for mass storage, these for L2ARC, and these others for ZIL/SLOG (and what the minimum requirements are).

Maybe someone on here has stumbled upon something similar.
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
I can see why you would want to do this. I've also thought there is too much emphasis on write endurance on STH lately.
 

capn_pineapple

Active Member
Aug 28, 2013
AFAIK the only sizing rule for a ZIL/SLOG is to have it slightly larger than your usual "flush to disk" requirement. Good write endurance for a ZIL/SLOG is fine, but at any given point you're only holding roughly your network speed times 5-10 seconds of data on it, e.g. a well-configured single GbE link works out to about 120MB/s x 8s = 960MB per flush cycle.
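
To make that sizing rule concrete, here is a minimal back-of-envelope sketch in Python; the function name and the 2x safety factor are my own assumptions, not anything ZFS defines:

```python
# Rough SLOG sizing from the "link speed x flush interval" rule of thumb.
# The 2x safety_factor is an arbitrary cushion, not a ZFS requirement.

def slog_size_needed_mb(link_mb_per_s: float, flush_interval_s: float,
                        safety_factor: float = 2.0) -> float:
    """Worst-case data that can land in the log between flushes, in MB."""
    return link_mb_per_s * flush_interval_s * safety_factor

print(slog_size_needed_mb(120, 8))    # single GbE: ~1920 MB with the 2x margin
print(slog_size_needed_mb(1200, 8))   # 10GbE: ~19,200 MB, still tiny next to any modern SSD
```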

Any modern SSD of any size will happily handle that much data for ages without issue. The bigger problem you'll run into is latency, which is where low-latency devices like the ZeusRAM win out, though that tends to only matter on InfiniBand or fibre networks.

One other point is that the ZIL/SLOG only comes into play for synchronous writes, either because the application requests them or because data security is an absolute requirement (no UPS or power-loss protection) and you force writes to be synchronous. It stores the most recent writes and is only read back after a power loss, so the pool can replay the writes whose copies in RAM were lost when the power went down.

As for L2ARC, you wouldn't need it with a large SSD pool, because the disks are already fast enough to handle the read/write cycles without it; it's just a cache anyway. The usual recommendation is to max out your RAM first (in some instances there's a limit of 128GB, though I believe Solaris doesn't have that).

Source:
Only use an L2ARC if your machine cannot hold enough RAM to store all the cached objects you need repeated access to. An L2ARC drive will use RAM for its indexing; keep this in mind.
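
To put that RAM-for-indexing point in numbers, here's a rough sketch; the ~70 bytes per cached record is a commonly cited OpenZFS header figure but it varies by version, so treat it (and the record sizes) as assumptions:

```python
# Back-of-envelope RAM cost of indexing an L2ARC device.
# header_bytes is an assumed per-record ARC header size; check your ZFS version.

def l2arc_index_ram_gb(l2arc_size_gb: float, avg_record_kb: float = 128,
                       header_bytes: int = 70) -> float:
    records = (l2arc_size_gb * 1024 * 1024) / avg_record_kb
    return records * header_bytes / (1024 ** 3)

print(round(l2arc_index_ram_gb(1024), 2))                    # 1TB of 128K records: ~0.55 GB of headers
print(round(l2arc_index_ram_gb(1024, avg_record_kb=8), 2))   # same device, 8K records (VM workload): ~8.75 GB
```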

Unless I've got it all horribly wrong, in which case, someone please correct me!
 

Entz

Active Member
Apr 25, 2013
Canada Eh?
@capn_pineapple is correct. I can't really give you an average, as it really is workload dependent. But think of it this way: your SLOG is going to eat 100% of all sync writes, so it needs extremely low latency (or at least the ability to ack sync writes quickly) and it needs to take a lot of writes.

The L2ARC is a lot more complex. It will at a minimum end up full, but how often it gets updated depends on how much your data changes and how small the cache is relative to the amount of random data being read over time (i.e. blocks get evicted to make room for new ones, as in any cache). It's unlikely to see a massive amount of writes quickly, but over time it will, likely a bit more than a normal consumer drive is designed for. The nice thing about an L2ARC is that if it dies it's really not an issue, so cheaper is usually better; losing a SLOG, on the other hand, will cause your pool performance to tank, massively.
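
As a rough way to sanity-check endurance against that "eats 100% of sync writes" point, something like the sketch below works; all of the workload numbers are hypothetical, plug in your own:

```python
# Rough SLOG lifetime estimate: every sync write passes through the log device,
# so daily sync-write volume vs. the SSD's rated TBW gives a ballpark lifetime.
# 100 MB/s, 25% duty cycle and 3000 TBW are made-up example inputs.

def slog_lifetime_years(sync_mb_per_s: float, duty_cycle: float,
                        rated_tbw: float) -> float:
    tb_per_day = sync_mb_per_s * duty_cycle * 86_400 / 1_000_000
    return rated_tbw / (tb_per_day * 365)

print(round(slog_lifetime_years(100, 0.25, 3000), 1))   # ~3.8 years
```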
 

Joel

Active Member
Jan 30, 2015
MiniKnight said:
I can see why you would want to do this. I've also thought there is too much emphasis on write endurance on STH lately.
After reading about write endurance testing on consumer drives, I'm starting to agree with you.

The SSD Endurance Experiment: Only two remain after 1.5PB - The Tech Report - Page 1

TL;DR: The 256GB Samsung 840 Pro was still going after 1.5PB of writes, though it started reallocating bad sectors around 600TB.

Two things this means to me:
1. I now have absolutely no fear about buying used consumer SSDs.
2. I'm not scared that my 128GB 840 Pro will die anytime soon; after 2 years it has seen ~5TB of writes. Extrapolating the test results gives me roughly 118 years before it even starts reallocating sectors (assuming a ~300TB reallocation point at 2.5TB/yr; see the quick arithmetic below), based on drive writes alone. Other factors could still kill it, obviously.
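
Spelling that extrapolation out (the 300TB reallocation point for the 128GB model is an assumption scaled down from the 256GB drive's 600TB result, not a spec):

```python
# Simple endurance extrapolation for the 128GB 840 Pro figures above.
written_tb = 5          # writes so far
years_elapsed = 2
threshold_tb = 300      # assumed reallocation point, half of the 256GB drive's 600TB

rate_tb_per_year = written_tb / years_elapsed                 # 2.5 TB/yr
years_left = (threshold_tb - written_tb) / rate_tb_per_year
print(years_left)                                             # ~118 years
```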

Of course, for a ZIL drive the absolutely critical factor to me is PLP (power-loss protection), regardless of whether you have a UPS; otherwise you're actually increasing your risk of data loss.
 