SAS drives and write cache option

ieronymous

New Member
Oct 26, 2021
8
2
3
Hello

I've been using some SAS2 Hitachi/WD Dell 2.5-inch 10k 1.2TB drives in production with ZFS (no separate ZIL or SLOG devices) in a RAID 10 layout. During a random check of the drives I noticed that the write cache feature was disabled on all of them. I ran the command sdparm -g WCE /dev/sd[a-e] and the output was

/dev/sd[a-e]: HGST HUC101212CSS600 U5E0
WCE 0 [cha: y, def: 0, sav: 0]

WCE probably stands for "write cache enabled"? The value, though, is zero, and I am wondering:

1. Is there any good reason to leave it off in a ZFS setup (would it be better left off in another RAID level or filesystem)?
2. Can I turn it on while the disks are members of the array, or do I have to unmount/detach them from the array first?
3. Does the type of disk (SAS, SATA, ATA) play a role in whether this feature should be enabled?
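(For reference, the check above can be looped over all the drives; a minimal sketch, assuming sdparm is installed and the device names really are /dev/sda..sde — the wce_value helper is just my own parser for the output format shown above, not part of sdparm.)

```shell
#!/bin/sh
# Query WCE on each drive (assumes sdparm is installed and the drives
# are /dev/sda..sde; adjust the glob to your device names).
if command -v sdparm >/dev/null 2>&1; then
    for d in /dev/sd[a-e]; do
        [ -b "$d" ] || continue   # skip if the glob did not match a device
        sdparm --get=WCE "$d"
    done
fi

# Helper: pull the current WCE value (0 or 1) out of a sdparm output
# line such as "WCE           0  [cha: y, def:  0, sav:  0]".
wce_value() {
    sed -n 's/^ *WCE *\([01]\).*/\1/p'
}
```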

Thank you in advance

PS I know what the cache does for a drive; I'm just curious why it has been disabled, and whether I'm missing any critical info about it.
 

ieronymous

New Member
Oct 26, 2021
The following discussion is worth a read regarding WCE on drives.
Thank you for the reply, but it raises more questions than answers.

The key facts in that discussion were the following:
<<<Write cache is a changeable drive setting. WCE is commonly enabled when buying a HDD vendor branded drive,
and disabled when buying a server vendor branded drive as they expect write caching to be handled
by a hardware RAID controller anyway. Notably most modern RAID controllers allows users to control
the drive write cache setting.>>>

<<<Linux disables FUA by default on SATA devices anyway, because of flaky reliability.>>>

So, since ZFS doesn't like hardware RAID controllers (i.e. IR mode) and it is better (if not mandatory) for the drives to sit on a plain HBA, this means that the WCE parameter on SAS drives would be a benefit (at least for writes) if enabled.
On the other hand, if I understood correctly, in order to do so the drive must also support DPO and FUA, and those have to be enabled as well?
That contradicts the statement <<<Linux disables FUA by default on SATA devices anyway, because of flaky reliability.>>>

In addition to the above statement, this link might make the confusion even worse (even though it concerns SATA drives):
[RFC] libata: enable SATA disk fua detection on default - Patchwork
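(As a side note, whether the kernel is actually using FUA for a given disk can be read out of sysfs on reasonably recent kernels; a small sketch — the fua_status name is my own, and the exact sysfs path is an assumption to verify on your kernel.)

```shell
#!/bin/sh
# Print the FUA flag for a block device name (1 = kernel issues FUA
# writes to it, 0 = it does not). Assumed path: /sys/block/<dev>/queue/fua.
fua_status() {
    f="/sys/block/$1/queue/fua"
    if [ -r "$f" ]; then
        cat "$f"
    else
        echo "unknown"
    fi
}

# Example: check the first drive (prints "unknown" if the path is absent).
fua_status sda
```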

So I don't have a clear answer, only more questions now. Bottom line: does anybody have it enabled with SAS drives on ZFS and working OK?

PS My ZFS pool sits on an HBA, so there is no caching on the controller, and all the disks have the WCE option disabled.
Also, can this be changed while they are members of the array and running, or do I have to do something first?
Do I need to check whether DPO/FUA are enabled as well? Should they be?
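(For what it's worth, sdparm can change the caching mode page on a live device, so in principle no unmount/detach is needed, though doing it on an idle pool seems prudent; a hedged sketch, with the set/clear commands left commented out so nothing changes by accident — the wce_changeable helper is my own, just parsing the "cha:" flag from the output format quoted earlier.)

```shell
#!/bin/sh
# Enable WCE on a live drive; --save also writes the saved page so the
# setting survives a power cycle (only if the drive supports "sav"):
# sdparm --set=WCE --save /dev/sda
# And to turn it back off:
# sdparm --clear=WCE --save /dev/sda

# Helper: check the "cha:" flag in sdparm --get=WCE output to see
# whether the drive allows changing WCE at all (y = changeable).
wce_changeable() {
    sed -n 's/.*cha: *\([yn]\).*/\1/p'
}
```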
 
Last edited:

i386

Well-Known Member
Mar 18, 2016
3,092
990
113
33
Germany
HDD cache is dangerous (the write is acknowledged while the data is still in volatile RAM, not yet flushed to permanent storage); disable it and let the RAID/ZFS layer handle the caching.
 

ieronymous

New Member
Oct 26, 2021
HDD cache is dangerous (the write is acknowledged while the data is still in volatile RAM, not yet flushed to permanent storage); disable it and let the RAID/ZFS layer handle the caching.
...true, but the read (or is it write?) speeds drop by a significant margin.