You clearly do not understand the basics of ZFS, and your comments are dangerous for anyone else reading this thread who may be getting into ZFS and researching their SLOG device. Hardware RAID cards don't have to be used for RAID : )
Configure it as a 1-drive RAID 0 (or a 2-drive RAID 1 if you prefer redundancy), and you can benefit from the writeback cache on the RAID card. It would be interesting to see what provides better performance: a 400GB DC S3700 as a "1-drive hardware RAID 0", or a much faster NVMe SSD.
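If someone does want to run that comparison, the workload that matters for a SLOG is small synchronous writes at queue depth 1, not peak throughput. A minimal sketch of that kind of test, assuming a Linux-like system where `O_DSYNC` is honored (the function name and parameters here are made up for illustration; a real comparison would use something like fio against the raw device):

```python
import os
import time

def sync_write_iops(path, block_size=4096, seconds=2.0):
    """Rough estimate of small synchronous-write IOPS: the workload a
    ZFS SLOG sees (sync writes, effectively queue depth 1).

    Assumption: O_DSYNC (fall back to O_SYNC) actually forces each
    write to stable storage on this platform/filesystem.
    """
    buf = os.urandom(block_size)
    sync_flag = getattr(os, "O_DSYNC", os.O_SYNC)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | sync_flag, 0o600)
    try:
        count = 0
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            os.write(fd, buf)   # each write waits for the device to ack
            count += 1
        return count / seconds
    finally:
        os.close(fd)
```

Run it against a file on each candidate device and compare the numbers; with a RAID card's writeback cache in front, the interesting part is how the result changes once sustained writes exceed what the cache can absorb.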
I also bring up some of these use cases because the workloads I am more familiar with are similar in some ways, so perhaps those performance stats are relevant here too (but perhaps not).
You want ZFS exposed to the disks directly so it can have as much information as possible. You don't want caching done that ZFS doesn't know about, and you don't want disks managed by hardware RAID, even if it's JBOD or 1-drive RAID 0s.
It wouldn't be interesting, because if you knew those drives and how SATA and NVMe work, you would understand that a SATA SSD, even with a hardware RAID card with 2GB of cache in front of it, can't keep up with NVMe sustained over a period of time (not even getting into mixed-workload differences, which for most users are a factor in deciding on a configuration).
Intel SATA S3700 - 35,000 write IOPS (SRC: Intel SSD DC S3700 Series Enterprise SSD Review | StorageReview.com)
Intel NVMe P3700 - 170,000 write IOPS (SRC: Intel SSD DC P3700 2.5" NVMe SSD Review | StorageReview.com)
The lower-capacity P3700 will be down to around 80K write IOPS, still double the SATA drive.
Once the 2GB cache in front of the S3700 ran out, the drive itself could not keep up with the cache flushes. The NVMe drive not only already has that much cache onboard (or more, I forget the exact number), it also has at least 2x the write performance and lower latency. SATA is also limited to a single command queue with a depth of only 32, whereas NVMe can go up to 65K queues with a depth of 65K on each.
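The queueing gap above is worth spelling out in numbers. A back-of-the-envelope sketch using the protocol maximums (AHCI/SATA: one queue of depth 32; NVMe: up to 65,535 I/O queues, each up to 65,536 entries; real drives expose far fewer queues than the spec allows):

```python
# Maximum outstanding commands per protocol (spec ceilings, not what
# any particular drive actually implements).
sata_queues, sata_depth = 1, 32           # AHCI: one queue, depth 32
nvme_queues, nvme_depth = 65535, 65536    # NVMe: up to 64K queues x 64K depth

sata_outstanding = sata_queues * sata_depth   # 32 commands in flight, max
nvme_outstanding = nvme_queues * nvme_depth   # ~4.3 billion, in theory

ratio = nvme_outstanding // sata_outstanding
print(f"SATA: {sata_outstanding}, NVMe: {nvme_outstanding} ({ratio}x)")
```

Even though no workload comes near the NVMe ceiling, the point stands: a SATA device saturates its one 32-deep queue almost immediately under parallel load, while NVMe keeps scaling.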
People have tried using the hardware RAID card's 2GB cache as a SLOG device, but the capacity isn't enough for those who actually need the increased performance (today's fast enterprise networks call for 8-16GB or more), and it's a hassle to use it like this even if you can get it to work as expected.
Just to be clear, no one is saying ZFS is faster than a hardware RAID card, or than any other file system for that matter; that's not the point of this discussion, or of ZFS.