Why not just partition the 1.2TB? There's no performance penalty, and in fact you'll likely get better performance as the activity is spread over multiple drives. If the idea is to use this for ZFS, you could create the cache device and the ZIL/SLOG/whatever by just partitioning the device and only adding the partitions. I have done this before with ZFS; it works. ZFS just wants a block device, and a partition will do. I have even used LVM volumes for testing.

Are you running on Solaris then? Because yeah, that's exactly what I want to do. That would be the ideal config, as then I could make, say, a 20GB partition for the ZIL, a much bigger one for L2ARC, plus a high-performance SSD pool, and so on, and, as you say, all the different purposes would get the benefit of the full stripe of disks. It's highly unlikely that all of them would be heavily busy at the same time, so that's definitely the preferred way to go.
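For the record, this is roughly what I have in mind once the drive is partitioned. The pool name and device paths below are just placeholders (Solaris slice naming shown), so treat it as a sketch rather than a final layout:

    # assuming s1 is a ~20GB slice for the SLOG and s2 is a much larger slice for L2ARC
    zpool add tank log /dev/dsk/c0t0d0s1
    zpool add tank cache /dev/dsk/c0t0d0s2

Any remaining space could then go into a separate fast SSD pool with a normal zpool create.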
However, the big potential problem, as I understood it, is something I had read in several places: on Solaris, giving ZFS a partition rather than a whole disk causes the OS to disable write caching on that device, because of legacy UFS behaviour (or, at the least, not to turn it on when it otherwise would be).
Googling this again now, I found info such as this from the OpenZFS Performance Tuning page:
On illumos, ZFS attempts to enable the write cache on a whole disk. The illumos (nee Solaris) UFS driver cannot ensure integrity with the write cache enabled, so by default Sun/Solaris systems using UFS file system for boot were shipped with drive write cache disabled (long ago, when Sun was still an independent company). For safety on illumos, if ZFS is not given the whole disk, it could be shared with UFS and thus it is not appropriate for ZFS to enable write cache. In this case, the write cache setting is not changed and will remain as-is. Today, most vendors ship drives with write cache enabled by default.
That implies the write cache won't be actively disabled if it's already enabled, but it won't be turned on either. One source that does say it gets disabled is FreeBSD's ZFS tuning guide, which states:
The caveat about only giving ZFS full devices is a solarism that doesn't apply to FreeBSD. On Solaris write caches are disabled on drives if partitions are handed to ZFS. On FreeBSD this isn't the case.
The FreeBSD guys could just be repeating second-hand info, as they don't have a direct interest in Solaris. Or their comments could have been directly related to code inherited from OpenSolaris at the time of the OpenZFS fork.
I've been trying to get 100% confirmation on whether this is still an issue in Solaris 11.3 today (I made this thread in the Solaris forum), but I haven't heard either way yet.
So if you're on Solaris and are saying it's fine, then that's great news - exactly what I want to hear. I'm still not quite clear on how to check the write-cache setting on a drive under Solaris, though.
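From what I've read, the way to inspect it on Solaris/illumos is the format utility in expert mode; I haven't actually run this yet, so the menu sequence below is just a sketch from memory:

    format -e
    # pick the disk from the list, then:
    format> cache
    cache> write_cache
    write_cache> display    # reports whether the drive's write cache is enabled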
I'll have my Warp tomorrow and plan to do as much benchmarking as possible to hopefully work out the best config.
PS. I just checked the SMART status of all my HDDs and found that the write cache is enabled on all but one. The odd one out is a drive I have only ever used in a whole-disk ZFS pool (which I created just a few days ago, having bought the disk used about a week ago).
This seems to suggest that ZFS doesn't turn the write cache on even when you do give it a whole disk, unless something else odd is going on with that particular drive. I believe I can turn the write cache on with hdparm in Linux, so I will try that on that one disk, and also experiment with creating partition/slice-based pools on disks where the cache is already on, roughly as sketched below.
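Here's roughly what I'm planning to try on the Linux box - sdX, sdX1 and "testpool" are placeholders, and the smartctl -g option needs a reasonably recent smartmontools:

    # check the current write-cache state
    hdparm -W /dev/sdX
    smartctl -g wcache /dev/sdX    # alternative view of the same setting

    # enable the write cache on the drive where it's currently off
    hdparm -W1 /dev/sdX

    # build a throwaway pool on a partition of a drive whose cache is on,
    # then re-check to see whether anything switched the cache off
    zpool create -f testpool /dev/sdX1
    hdparm -W /dev/sdX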