SAS drives and the write cache option


ieronymous

New Member
Oct 26, 2021
Hello

I've been using in production some SAS2 disks, Hitachi/WD Dell-branded 2.5" 10k 1.2 TB drives, with ZFS (no separate ZIL or SLOG devices) in a RAID 10 layout. During a random check of the drives I noticed that write cache was disabled on all of them. I ran sdparm -g WCE /dev/sd[a...e] and the output was:

/dev/sd[a...e] HGST HUC101212CSS600 U5E0
WCE 0 [cha: y, def: 0, sav: 0]

WCE presumably stands for "write cache enabled"? The value, though, is zero, and I am wondering:

1. Is there any good reason to leave it off in a ZFS setup (would it be better left off in another RAID level or filesystem)?
2. Can I turn it on while the disks are members of the array, or do I have to unmount/detach them from the array first?
3. Does the type of disk (SAS, SATA, ATA) play a role in whether this feature should be enabled?

Thank you in advance

PS: I know what the cache does and how it helps a drive; I'm just curious why it has been disabled, or whether I'm missing any critical info about it.
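For anyone wanting to sanity-check this across several drives, here is a minimal sketch that pulls the interesting fields out of the sdparm output. The sample line is the one quoted above; on a live system you would capture the line from sdparm itself (see the comment), and the device name is an example:

```shell
#!/bin/sh
# Sample "sdparm --get=WCE" output line (copied from the post above).
# Live usage would be something like:
#   line=$(sdparm --get=WCE /dev/sda | grep WCE)
line='WCE           0  [cha: y, def:  0, sav:  0]'

current=$(printf '%s\n' "$line" | awk '{print $2}')                   # current value
changeable=$(printf '%s\n' "$line" | sed 's/.*cha: *\([yn]\).*/\1/')  # y = changeable

echo "WCE current=$current changeable=$changeable"
# prints: WCE current=0 changeable=y
```

Here current=0 means the write cache is off, while cha: y means the drive will accept a MODE SELECT that changes it.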
 

ieronymous

New Member
Oct 26, 2021
The following discussion is worth a read regarding WCE on drives.
Thank you for the reply, but it raises more questions than answers.

The key points in that discussion were the following:
<<<Write cache is a changeable drive setting. WCE is commonly enabled when buying an HDD-vendor-branded drive,
and disabled when buying a server-vendor-branded drive, as the server vendor expects write caching to be handled
by a hardware RAID controller anyway. Notably, most modern RAID controllers allow users to control
the drive write cache setting.>>>

<<<Linux disables FUA by default on SATA devices anyway, because of flaky reliability.>>>

So, since ZFS doesn't like hardware RAID controllers (i.e. IR mode) and it is better (if not mandatory) for the drives to sit on a plain HBA, enabling the WCE parameter on SAS drives should be a benefit (at least on reading).
On the other hand, if I understood correctly, in order to do so the drive must also support both DPO and FUA, and have them enabled as well?
That contradicts the statement <<<Linux disables FUA by default on SATA devices anyway, because of flaky reliability.>>>

In addition to the above, this link might make the confusion even worse (even though it concerns SATA drives):
[RFC] libata: enable SATA disk fua detection on default - Patchwork

So I don't have a clear answer, only more questions now. Bottom line: does anybody have it enabled with SAS drives on ZFS and working OK?

PS: My ZFS pool sits behind an HBA, so there is no caching on the controller, and all the disks have the WCE option disabled.
Also, can this be changed while they are members of the running array, or do I have to do something first?
Do I need to check whether DPO/FUA are enabled as well? Should they be?
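Side note on the DPO/FUA question: the kernel's sd driver logs the cache state and DPO/FUA support when a disk attaches, so the kernel ring buffer is one place to look. A sketch that parses a message in the format the driver typically logs (the sample line below is illustrative, not from this system; live usage is in the comment):

```shell
#!/bin/sh
# Typical attach-time message from the sd driver (sample, not from this system).
# Live usage: msg=$(dmesg | grep '\[sda\] Write cache' | tail -n 1)
msg='sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA'

# Does the drive report DPO/FUA support?
case "$msg" in
    *'supports DPO and FUA'*) fua=yes ;;
    *)                        fua=no  ;;
esac
# Extract the write cache state ("enabled" or "disabled").
wc_state=$(printf '%s\n' "$msg" | sed 's/.*Write cache: \([a-z]*\),.*/\1/')

echo "write_cache=$wc_state dpo_fua=$fua"
# prints: write_cache=disabled dpo_fua=yes
```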
 

i386

Well-Known Member
Mar 18, 2016
HDD cache is dangerous (a write is acknowledged while the data is still in volatile RAM and not yet flushed to permanent storage). Disable it and let the RAID/ZFS handle the caching.
 

ieronymous

New Member
Oct 26, 2021
HDD cache is dangerous (a write is acknowledged while the data is still in volatile RAM and not yet flushed to permanent storage). Disable it and let the RAID/ZFS handle the caching.
...true, but the read speeds (or is it write?) drop by a significant margin.
 

kevindd992002

Member
Oct 4, 2021
Yes, and the data is safe.

If you want to try volatile storage, try a ramdisk :)
So then why is write caching enabled by default on SATA drives? Does that mean SATA disks are designed as volatile storage, since they are targeted more at consumers?
 

i386

Well-Known Member
Mar 18, 2016
For performance reasons (and not just on SATA HDDs).
Yes and no; it's a trade-off between performance and the risk of data loss/corruption. The risk of losing/corrupting data is fairly small, but it sucks when it happens to you and your important data :D
 

kevindd992002

Member
Oct 4, 2021
Right, that makes sense. But enabling it on SAS drives is no different from enabling it on SATA drives in terms of performance and risk, correct?

I'm running badblocks against a used HGST SAS drive I got and was surprised that the write speed was only 25 MB/s. I used smartctl to enable the write cache (while badblocks was running) and the speed immediately jumped to 200 MB/s!

The only problem I have now is that I've read this is not a persistent change. Is there a way to make it persistent?
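One likely route, sketched below: smartctl's wcache change only touches the current copy of the caching mode page, but sdparm can also write the saved copy, and since the output earlier in the thread shows cha: y alongside a sav field, a --save should stick across power cycles. This is a sketch, not something tested on your hardware; the device names are examples, and DRY_RUN=1 only prints the commands instead of running them:

```shell
#!/bin/sh
# Enable WCE in the *saved* copy of the caching mode page so it persists
# across power cycles. DRY_RUN=1 prints the commands; set to 0 to apply.
DRY_RUN=1

for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde; do   # example devices
    cmd="sdparm --set=WCE --save $dev"
    if [ "$DRY_RUN" = 1 ]; then
        echo "$cmd"
    else
        $cmd
    fi
done
```

Afterwards sdparm --get=WCE should show sav: 1; if a drive ignores the saved page, a udev rule (or boot script) re-running the same command is the usual fallback.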