ZFS SLOG/ZIL drives in 2023


mattlach

Active Member
Aug 1, 2014
343
97
28
Hey everyone,

Now that Intel's Optane drives have been off the market long enough that they are getting harder to find reliably, does anyone have any thoughts on what's out there that makes for a good SLOG/ZIL drive?

I still have two 280GB Optane 900p's mirrored in my storage server, and while they will likely be fine for some time, I am curious what the future holds here. Should we be clinging to our old Gen3 Optanes, or are there new, better options out there that fit the super-low-latency requirement?

Appreciate any thoughts.
 

gea

Well-Known Member
Dec 31, 2010
3,163
1,195
113
DE
Optane is still the best NVMe for an Slog if you use disk-based pools. It does not matter much whether it is a 1600, 90x, 48x or 59x; on disk-based pools the difference should not be essential apart from longevity. For a fast NVMe pool, you can use drives with powerloss protection and simply enable sync without an extra Slog; an extra Optane Slog gives only a minimal improvement there. For very fast pools, ZFS direct io (under development), which writes once directly instead of the double write of current sync writes, will probably be the future.

If you use a dedicated Slog, it is important that it is much faster than the pool itself regarding steady small io. Persistent RAM-based Slogs (DRAM or Optane) are another option for faster pools, especially as you only need around 10GB for Slog write-cache protection.
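A quick sketch of both setups (pool, dataset and device names below are only placeholders):

# disk-based pool with a dedicated mirrored Slog, then force sync writes on the dataset
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
zfs set sync=always tank/vmstore

# all-NVMe pool built from PLP drives: no extra Slog, just enable sync
zfs set sync=always fastpool/vmstore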
 
Last edited:
  • Like
Reactions: BoredSysadmin

i386

Well-Known Member
Mar 18, 2016
4,245
1,546
113
34
Germany
I think the optanes will be good enough for a long while to accelerate spinning rust.
Newer ssds have incredible random iops and sequential read/write numbers but the performance for random io on low queue depths stagnates around the same level. Optane and dram based ssds (like the rms-200) still outperform the new pcie 4.0 or 5.0 ssds.
This might change in the future when CXL-connected memory gets more traction and storage systems add support for it, but it could take a while until that reaches your or my homelab :D
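You can see this yourself with a QD1 sync write test in fio, which is roughly the pattern an Slog sees. The device and runtime below are only examples, and writing to a raw device destroys its contents, so only run it against a blank test drive:

fio --name=slog-latency --filename=/dev/nvme0n1 --ioengine=psync --rw=randwrite --bs=4k --numjobs=1 --iodepth=1 --direct=1 --sync=1 --time_based --runtime=60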
 

gb00s

Well-Known Member
Jul 25, 2018
1,190
602
113
Poland
I'm with @i386 . SLOG-wise RMS-200/300s are still the thing. Super cheap now (~50-90EUR in batches for 8GB version). Only downside is the 8GB size which becomes an issue with 100G networking. Super reliable. Still buying broken ones for parts (fans break often). NVDIMM-N (now affordable) in 16GB size can still keep up with 100G networking, but can be tricky to get working hardware-wise. Not using anything else anymore for SLOG. Optane PMEM and 4800x only for read cache. I suggest not using 900P/905P as SLOG, even as cheap as they are atm. No PLP == No PLP.

Future-wise, if you go with well performing regular NVMe storage with PLP and use a distributed filesystem with several storage targets, SLOG is not an issue anymore.
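For rough sizing, a common rule of thumb is that the SLOG only needs to absorb a few seconds of incoming sync writes (roughly two transaction groups, zfs_txg_timeout defaults to 5s), worst case at line rate:

10 GbE : ~1.25 GB/s x 5 s x 2 ≈ 12.5 GB
100 GbE: ~12.5 GB/s x 5 s x 2 ≈ 125 GB

Real workloads rarely sustain line-rate sync writes, which is why a fast 16GB NVDIMM-N can still keep up in practice, while 8GB is clearly on the small side for 100G.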
 

dlasher

New Member
Dec 9, 2016
10
0
1
54
I'm with @i386 . SLOG-wise RMS-200/300s are still the thing. Super cheap now (~50-90EUR in batches for 8GB version).
Didn't even realize these existed -- talk about the perfect SLOG device. Just picked up two - they're running exactly in the price range you outlined. Thank you for sharing.

Product Info:
RMS-200 : Edge Card RMS-200 - Radian Memory Systems (DDR3)
RMS-300 : Edge Card - RMS-300 - Radian Memory Systems (DDR4)
RMS-375 : RMS-375 - Radian Memory Systems (U.2 version)

other fun products on their site....
 

mattlach

Active Member
Aug 1, 2014
343
97
28
I'm with @i386 . SLOG-wise RMS-200/300s are still the thing. Super cheap now (~50-90EUR in batches for 8GB version). Only downside is the 8GB size which becomes an issue with 100G networking. Super reliable. Still buying broken ones for parts (fans break often). NVDIMM-N (now affordable) in 16GB size can still keep up with 100G networking, but can be tricky to get working hardware-wise. Not using anything else anymore for SLOG. Optane PMEM and 4800x only for read cache. I suggest not using 900P/905P as SLOG, even as cheap as they are atm. No PLP == No PLP.

Future-wise, if you go with well performing regular NVMe storage with PLP and use a distributed filesystem with several storage targets, SLOG is not an issue anymore.
My understanding is that the Optane 900p and 905p write directly to the 3D XPoint media, so there is no volatile RAM cache to be lost in case of a power failure, and thus no data loss.

At least per this old TrueNAS forum thread.

This seems to suggest it is perfectly fine to use them as SLOG devices.
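Either way, it is easy to check that an SLOG is actually absorbing the sync writes: zpool status lists the log vdev, and zpool iostat shows it taking the writes while a sync-heavy workload runs (pool name is just an example):

zpool status tank
zpool iostat -v tank 1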
 
Last edited:
  • Like
Reactions: T_Minus

itronin

Well-Known Member
Nov 24, 2018
1,242
804
113
Denver, Colorado
Didn't even realize these existed -- talk about the perfect SLOG device. Just picked up two - they're running exactly in the price range you outlined. Thank you for sharing.

Product Info:
RMS-200 : Edge Card RMS-200 - Radian Memory Systems (DDR3)
RMS-300 : Edge Card - RMS-300 - Radian Memory Systems (DDR4)
RMS-375 : RMS-375 - Radian Memory Systems (U.2 version)

other fun products on their site....
They run REALLY HOT. Great SLOG device for 10Gbps (at the 8GB size). The 16GB size seems hard to find. I have noticed a few RMS-300s showing up on the bay lately; those used to be hard to find too.
 

gb00s

Well-Known Member
Jul 25, 2018
1,190
602
113
Poland
They run REALLY HOT. Great SLOG device for 10Gbps (at the 8GB size). The 16GB size seems hard to find. I have noticed a few RMS-300s showing up on the bay lately; those used to be hard to find too.
Wouldn't confirm they run 'really hot'. Mine run 55-60C w/o any airflow, or side-by-side with 1 fan and covered by the other card. With airflow, ~40-45C under almost full load the whole day (EB of writes written).

EDIT:
root@pve2:~# nvme smart-log /dev/nvme0n1
Smart Log for NVME device:nvme0n1 namespace-id:ffffffff
critical_warning : 0
temperature : 59°C (332 Kelvin)
available_spare : 0%
available_spare_threshold : 0%
percentage_used : 0%
endurance group critical warning summary: 0
Data Units Read : 92,852,432,186 (47.54 PB)
Data Units Written : 3,125,194,478,601 (1.60 EB)
host_read_commands : 451,966,934
host_write_commands : 194,739,559,709
controller_busy_time : 108,644
power_cycles : 82
power_on_hours : 39,562
unsafe_shutdowns : 69
media_errors : 0
num_err_log_entries : 129
Warning Temperature Time : 0
Critical Composite Temperature Time : 0
Thermal Management T1 Trans Count : 0
Thermal Management T2 Trans Count : 0
Thermal Management T1 Total Time : 0
Thermal Management T2 Total Time : 0
 
Last edited:
  • Wow
  • Like
Reactions: nexox and itronin

mattlach

Active Member
Aug 1, 2014
343
97
28
I am really interested in picking up a couple of these RMS devices. Can anyone speak to how long the caps can maintain data in a powered-off state?

Also, are we concerned at all about capacitor lifespan? The Radian website says they have refined their "super capacitors" such that they have a 6 year life instead of a 3 year life, but some of these RMS devices are older than 6 years now...

Does CAP replacement need to feature prominently in the maintenance of these things?

Also, has anyone compared sync write performance between these and the Optane 900p/905p?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
I am really interested in picking up a couple of these RMS devices. Can anyone speak to how long the caps can maintain data in a powered-off state?

Also, are we concerned at all about capacitor lifespan? The Radian website says they have refined their "super capacitors" such that they have a 6 year life instead of a 3 year life, but some of these RMS devices are older than 6 years now...

Does CAP replacement need to feature prominently in the maintenance of these things?

Also, has anyone compared sync write performance between these and the Optane 900p/905p?
Someone did a thread here comparing them to Optane; they were faster, but not OMG faster the way Optane itself is.