Setting up test bed for ZIL and L2ARC SSDs


Patrick

Administrator
Staff member
Dec 21, 2010
In terms of testing, ZIL and L2ARC are easy to benchmark synthetically, but harder to test in a real-world way. A debate I keep having with folks is that 100% write workloads are easy to test but incredibly rare.
 

levak

Member
Sep 22, 2013
Deci: I was, but then I saw the price :) I know they are the best one can get, but it's too expensive and I can't justify the price. Unfortunately I will have to go with a slower drive.
Patrick: It's true, 100% writes are not a real-world scenario. The good thing is I already have a working system where I can check the read/write ratio and test against the same scenario. I can also get stats on how many random/sequential transactions happen, so I think I can build a pretty good test scenario.
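Something like this fio run is what I have in mind once I have the real numbers; the 70/30 mix, block size and the file path/size below are just placeholders until I pull the actual ratios from the production box:

Code:
# placeholder workload - replace rwmixread/bs with the measured production mix;
# sync=1 forces the writes through the ZIL
fio --name=replay-mix --filename=/tank/fio-test --size=10G \
    --rw=randrw --rwmixread=70 --bs=8k --ioengine=psync \
    --sync=1 --runtime=300 --time_based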

Matej
 

gea

Well-Known Member
Dec 31, 2010
DE
There are not so many ZIL-capable SAS SSDs around that offer

- ultra low latency
- high write iops
- reliability
- advanced power-loss protection

Usually you buy the
8 GB HGST SAS ZeusRAM
- the best of all, but expensive (about 2,500 Euro) and 3.5"

200 GB HGST SAS S842, about 800 Euro (I would overprovision the SSD down to 20 GB for a ZIL; see the sketch at the end of this post)
s842 SAS SSD | HGST Storage
- This is what storage vendors like pogolinux do, for example with NexentaStor boxes, when the ZeusRAM is too expensive

Compare:
SAS SSD | HGST Storage

If you do not need HA or an expander, the Intel SATA S3700/S3710 100/200 GB line is the best of all and quite affordable (from about 200 Euro for 100 GB). I would expect the S3700-200 to be faster than the HGST S842-200.
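As a rough sketch of what I mean by overprovisioning, on the SATA S3700 for example (device and pool names are examples, the sector count is only approximately 20 GB, and the drive should be secure erased first):

Code:
# limit the visible capacity of the SSD to ~20 GB via a Host Protected Area
hdparm -N p39062500 /dev/sdX
# then hand the whole (now small) device to the pool as a log device
zpool add tank log /dev/sdX

A small partition on an otherwise empty SSD achieves much the same effect if you prefer not to touch the HPA.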
 

Entz

Active Member
Apr 25, 2013
Canada Eh?
In terms of testing, ZIL and L2ARC are easy to benchmark synthetically, but harder to test in a real-world way. A debate I keep having with folks is that 100% write workloads are easy to test but incredibly rare.
Very true, but it is also nice to know the bounds of your performance, i.e. "if I am doing sync writes I can hit a maximum of 200 MB/s sequential, or 50 MB/s random 4K at 5,000 IOPS". Then compare that to your own workload and decide what needs to be fixed. L2ARC is almost always going to be limited by either RAM or network speed.
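For example, bounds like that could come from a couple of quick fio runs against the pool (a sketch only; the file path and sizes are made up and would need tuning):

Code:
# sequential sync-write ceiling (what the SLOG can absorb)
fio --name=seq-sync --filename=/tank/fio-test --size=10G \
    --rw=write --bs=1M --sync=1 --ioengine=psync --runtime=60 --time_based
# 4k random sync-write ceiling (IOPS bound)
fio --name=rand-sync --filename=/tank/fio-test --size=10G \
    --rw=randwrite --bs=4k --sync=1 --ioengine=psync --runtime=60 --time_based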

That being said, ZFS testing is extremely system specific, so what works great on one machine may not work as well on another. You really need to tune it for your own environment.

It just seems that there are very few "charts" out there. People tend to throw a SLOG on their system even if they don't need it, or wonder why their performance sucks when they do need one and only have some cheap consumer drive. That way you can point them to "Patrick's super list of fantastic SLOGs".
 

Patrick

Administrator
Staff member
Dec 21, 2010
It just seems that there are very few "charts" out there. People tend to throw a SLOG on their system even if they don't need it, or wonder why their performance sucks when they do need one and only have some cheap consumer drive. That way you can point them to "Patrick's super list of fantastic SLOGs".
Sadly, it is pretty easy these days. Get an Intel DC P3600 400GB and over-provision it to 100GB so you do not have to deal with write endurance. I am fairly sure that for $600 the performance end of the spectrum is taken care of, since you have sequential writes that can handle a 10GbE network easily. For a PCIe HA SLOG we will likely have to wait for the newly released Avago PCIe switches (from the PLX acquisition) to go mainstream.
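One quick sanity check on your own hardware is to watch the log vdev while a sync-heavy workload runs (pool name is just an example):

Code:
# per-vdev throughput every second; the log device line shows how close
# the SLOG is getting to 10GbE rates (~1.1-1.2 GB/s)
zpool iostat -v tank 1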
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Sadly, it is pretty easy these days. Get an Intel DC P3600 400GB and over-provision it to 100GB so you do not have to deal with write endurance. I am fairly sure that for $600 the performance end of the spectrum is taken care of, since you have sequential writes that can handle a 10GbE network easily. For a PCIe HA SLOG we will likely have to wait for the newly released Avago PCIe switches (from the PLX acquisition) to go mainstream.
Intel® SSD DC P3600 Series Specifications

"With PCIe Gen3 support and NVMe queuing interface, the Intel SSD DC P3600 Series delivers excellent sequential read performance of up to 2.8 GB/s and sequential write speeds of up to 1700 MB/s."

400GB
ARK | Intel SSD DC P3600 Series (400GB, 2.5in PCIe 3.0, 20nm, MLC)
"Sequential Write 550 MB/s"

1.6TB
ARK | Intel SSD DC P3600 Series (1.6TB, 1/2 Height PCIe 3.0, 20nm, MLC)
"Sequential Write 1600 MB/s"

I assume the 2TB is the one they mean with that "up to" figure, but I couldn't find the URL for that spec, so I stopped at 1.6TB. These seem like they'd be the cheapest/best for L2ARC, and a 400GB P3700 would be best for a SLOG (Intel: Sequential Write 1080 MB/s). Then again, you may get near 10GbE with the 800GB; I wasn't comparing all capacities :) and price/performance may make a larger P3600 worthwhile.


Is that your experience with the 400GB (550 MB/s), or is it closer to ~1000 MB/s?

*I know you and others have mentioned Intel doc errors, hence the question.*

What about a 1.2TB P3600 for both a SLOG and L2ARC, can you do such a thing?
In such a case, what would be the limiting performance factor, the NVMe drive itself, the PCIe slot it is in, etc.?
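I'm picturing something along these lines (device name, pool name and the split sizes are made up, not tested):

Code:
# split one NVMe drive into a small SLOG partition and a large L2ARC partition
parted -s /dev/nvme0n1 mklabel gpt
parted -s /dev/nvme0n1 mkpart slog 1MiB 30GiB
parted -s /dev/nvme0n1 mkpart l2arc 30GiB 100%
# add both partitions to the pool
zpool add tank log /dev/nvme0n1p1 cache /dev/nvme0n1p2

In practice you would probably reference the partitions by /dev/disk/by-id rather than nvme0n1pX.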
 

Patrick

Administrator
Staff member
Dec 21, 2010
@T_Minus you are right, the 400GB is more like 550 MB/s sequential write! Getting tired, obviously. Nice catch. Still faster than most SAS SSDs.