whitey's FreeNAS ZFS ZIL testing


eroji

Active Member
Dec 1, 2015
276
52
28
40
I am no ZFS expert, but maybe? If your write request is sync, it is written to the ZIL as fast as possible, and ZFS responds with an acknowledgement when that completes, which allows subsequent writes. At the same time, ZFS writes what is buffered in memory out to your pool as fast as it can. What is in your ZIL is a log of the write transactions headed for your pool. So I assume that if your pool is way too slow to keep up, or your memory fills up, the whole process will have to wait. Someone please correct me if this is wrong.
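If it helps to see that acknowledgement path in action, here is a minimal Python sketch (the file path is a hypothetical dataset mount, and it assumes sync=standard so fsync() is what triggers the sync path): each fsync() blocks until ZFS has committed the write through the ZIL, or the SLOG if one is attached, so the loop's throughput is bounded by the log device's latency.

```python
import os, time

# Hypothetical path on a ZFS dataset; adjust to your pool/dataset.
PATH = "/mnt/tank/test/sync_demo.bin"
BLOCK = b"\0" * 4096  # one 4 KiB record per write

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o644)
start = time.time()
for _ in range(1000):
    os.write(fd, BLOCK)
    os.fsync(fd)  # forces a sync write: ZFS must commit it via the ZIL
                  # (SLOG if present) before this call returns
os.close(fd)
elapsed = time.time() - start
print(f"1000 fsync'd 4 KiB writes in {elapsed:.2f}s "
      f"({1000 / elapsed:.0f} sync IOPS)")
```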
 

marcoi

Well-Known Member
Apr 6, 2013
1,533
289
83
Gotha Florida
It would be nice if you could somehow share that SLOG between pools in FreeNAS. That way you could get one high-end device and share it with all the pools. Does anyone know if that's possible?
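One way this is sometimes done (a sketch only, untested here; the device and pool names are made up) is to partition a single fast device and give each pool its own partition as a log vdev. On FreeBSD/FreeNAS that could be scripted roughly like this:

```python
import subprocess

DEVICE = "nvd0"             # hypothetical shared high-end SSD/NVMe device
POOLS = ["tank", "vmpool"]  # hypothetical pools, one log partition each
PART_SIZE = "16G"           # a SLOG rarely needs more than ~16 GB

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["gpart", "create", "-s", "gpt", DEVICE])  # GPT-label the device
for i, pool in enumerate(POOLS, start=1):
    # carve one partition per pool...
    run(["gpart", "add", "-t", "freebsd-zfs", "-s", PART_SIZE, DEVICE])
    # ...and attach it to that pool as a dedicated log vdev
    run(["zpool", "add", pool, "log", f"/dev/{DEVICE}p{i}"])
```

The catch is that the pools then contend for the same device's write IOPS, and losing that one device takes the sync path of every pool down with it.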
 

marcoi

Well-Known Member
Apr 6, 2013
1,533
289
83
Gotha Florida
nice!

Side idea - Is it possible to create a RAID0 pool of SSD drives in FreeNAS and then have FreeNAS use it as a SLOG device?
2nd idea - A variation of the first - can you create a RAM drive and use it as a SLOG?
 

Benten93

Member
Nov 16, 2015
48
7
8
nice!

Side idea - Is it possible to create a RAID0 pool of SSD drives in FreeNAS and then have FreeNAS use it as a SLOG device?
2nd idea - A variation of the first - can you create a RAM drive and use it as a SLOG?
I wouldn't trust a RAID0 as a SLOG, even if it shouldn't affect data integrity. If the crash happens during business hours... have fun :p

Regarding the RAM drive, it's the same story: it's not reliable. But to answer your question, it should be possible.
 

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
nice!

Side idea - Is it possible to create a RAID0 pool of SSD drives in FreeNAS and then have FreeNAS use it as a SLOG device?
2nd idea - A variation of the first - can you create a RAM drive and use it as a SLOG?
A RAID0 will not help, as it doesn't address the problem.

A SLOG device needs ultra-low latency, high write IOPS under steady write loads at low queue depth, and powerloss protection. RAID0 only improves sequential performance, which is not relevant for a SLOG.

A RAM disk will not help either, as it doesn't address the problem.
A SLOG is not a device to increase performance; simply disabling sync would be the fastest method. A SLOG guarantees that a committed write is on disk even in the case of a crash during the write, similar to the intention of a cache/BBU on a hardware RAID.

There is a "ramdisk"-like device for SLOG, the ZeusRAM. It is very expensive, as the real problem is guaranteeing that the RAM content is saved to persistent storage on a crash.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Have a system set up in the lab with one that I offered to @whitey :)
Whoa, I did see a comment that you could get a system up, but I did not see it offered to me; I may have misinterpreted or missed that. Are there some other disks behind it so we can keep the use case similar... a FreeNAS system? Hook it up! :-D
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,805
113
Whoa, I did see a comment that you could get a system up, but I did not see it offered to me; I may have misinterpreted or missed that. Are there some other disks behind it so we can keep the use case similar... a FreeNAS system? Hook it up! :-D
OK - between now and Monday I will be in the data center and will get you hooked up. Yes on other disks.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
A RAID0 will not help, as it doesn't address the problem.

A SLOG device needs ultra-low latency, high write IOPS under steady write loads at low queue depth, and powerloss protection. RAID0 only improves sequential performance, which is not relevant for a SLOG.

A RAM disk will not help either, as it doesn't address the problem.
A SLOG is not a device to increase performance; simply disabling sync would be the fastest method. A SLOG guarantees that a committed write is on disk even in the case of a crash during the write, similar to the intention of a cache/BBU on a hardware RAID.

There is a "ramdisk"-like device for SLOG, the ZeusRAM. It is very expensive, as the real problem is guaranteeing that the RAM content is saved to persistent storage on a crash.
@gea, you can see that in my testing I did use/have a ZeusRAM. While it is a good device, its use as a SLOG seems to be trounced by more modern tech (SAS3/NVMe enterprise-class drives), although it still holds its own fairly well. I didn't do quite an apples-to-apples comparison, so maybe my magnetic pool limited the ZeusRAM's effectiveness. Open to suggestions, or I can rinse/repeat the testing on identical pool disks WRT capacity.
 

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
The ZeusRAM is end of life. It does not use the newest RAM technology and is limited by the SAS interface. I referred to it only as an example to show where the problem is with a ramdisk.

The future is NVMe, like Intel's P3700 or similar. Sadly there is no successor to the ZeusRAM from HGST with PCIe, fast RAM, and 10-20 GB of capacity (which is enough for a SLOG), like a http://www.thessdreview.com/our-rev...ve-101-ramdisk-review-500k-iops-ddr3-storage/
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
If this is going to turn into a real SLOG device test, then it should be conducted as such, i.e. with more than a 10 GB transfer or 5 minutes of data transfer. If someone is planning to implement a SLOG device to increase the performance of their VM pool in ZFS, then let's say there are X VMs relying on the pool + SLOG, and there's always going to be writing going on 24/7 no matter what.

With that said, my thought is that we need to test the SLOG device for at least the minimum amount of time it takes to put an enterprise drive into steady state, because that's when we'll see the write IOPS drop; in some drives it's very significant, and in others (maybe those with cache) not so much.


@whitey, how about setting sync to always, looking at some of @Patrick's articles and some from other sites to see the average time to put an enterprise SSD (of the same/similar or greater capacity) into steady state, and working from that as a starting point for the test? A rough harness along those lines is sketched below.
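A sustained-run sketch in Python (the path, block size, and duration are all assumptions, and sync=always is presumed set on the dataset): it rewrites the same region with fsync'd 4 KiB writes and reports sync IOPS per interval, so the drop into steady state shows up in the output over the hours of the run.

```python
import os, time

PATH = "/mnt/tank/slogtest/steady_state.bin"  # hypothetical test file
BLOCK = b"\0" * 4096
DURATION = 4 * 3600   # run for hours, not minutes, to reach steady state
INTERVAL = 60         # report sync IOPS once per minute

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o644)
end = time.time() + DURATION
while time.time() < end:
    os.lseek(fd, 0, os.SEEK_SET)  # rewrite the same region, keep the file bounded
    ops = 0
    window_end = time.time() + INTERVAL
    while time.time() < window_end:
        os.write(fd, BLOCK)
        os.fsync(fd)              # every write takes the ZIL/SLOG path
        ops += 1
    print(f"{time.strftime('%H:%M:%S')}  {ops / INTERVAL:.0f} sync IOPS")
os.close(fd)
```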