WD Blue drives and the Load Cycle count issue


Night Shade

New Member
Nov 8, 2018
So I have a FreeNAS system with HGST NAS drives set up in a RAIDZ3, and it has worked well for the most part. However, transfers between datasets on the pool are atrocious, so data that needs to be written to the pool and then moved is a PAIN to deal with, often dropping below 50 MB/s. By comparison, I can write to the pool from my desktop at over 150 MB/s from its HDD and at about 400 MB/s from a RAM drive.

Right now funds are on the tight side with the holidays coming up as well as vehicle repairs, but I wanted to increase my speeds. Since the application runs on the NAS in a jail, local storage on the system is best, and the cheapest way to add it is with a WD Blue drive. Now I know this is not the best long-term solution: the drives can create more vibration, and so on. But since this is a single drive that is not holding any information I am worried about losing, I have been willing to take the risk. The case currently weighs in around 100 pounds and sits on a solid platform, so vibration should be a minimal issue. The one major problem is that the drive CONSTANTLY wants to park its heads, roughly every 5 seconds, and the WDIDLE3 tool does not work on the newer drives. Even with data being read and written, the nature of ZFS means that even on a single drive the load cycle count will steadily climb. But I have figured out a way around this with two simple scripts.

I would not advocate using these drives in any type of RAID, but as a single drive it seems to work pretty well: I can now do transfers at around 150 MB/s, the same as I was able to do from my desktop HDD to the pool.

The scripts are as follows.

run.sh
Code:
#!/bin/sh
# POSIX sh so the script runs on stock FreeBSD/FreeNAS as well as Linux
# (bash is not at /bin/bash on FreeBSD).
# Loop forever, touching the drive every 4 seconds so its idle timer
# never expires and the heads never park.
while true
do
    /mnt/drive/temp/file.sh
    sleep 4
done

file.sh
Code:
#!/bin/sh
# A trivial read against the drive; listing a directory is enough
# activity to reset the head-parking idle timer.
ls /mnt/drive/storage

Both scripts need to be marked executable (chmod +x). The first script calls the second one every 4 seconds; the second simply lists the contents of a directory on the drive. The reason the interval is set to 4 seconds is that at 5 seconds, when the directory is small, the drive will still sneak in some head parking every so often; reducing it to 4 seems to have alleviated this. The drive currently has 200 hours on it, of which the first 50 were burn-in and testing. Once put into the system it began to rack up load cycles at an alarming rate, approximately 50 per hour, once data was completely transferred and the drive was put into service. Since the script has been running (which initially was at a 5 second interval), the count has gone from around 300 to 387 currently, an increase of about 87 over approximately 145 hours of operation, which is a HUGE reduction from what it was previously.
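
If you want to watch the counter yourself, smartctl will report it. A minimal check, assuming the drive shows up as ada1 on FreeBSD (adjust the device name for your system):

Code:
# SMART attribute 193 (Load_Cycle_Count) is the counter that climbs
# every time the heads park.
smartctl -A /dev/ada1 | grep -i load_cycle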

I have set up the initial script to run via a "postinit" script, but a simple cron job that runs on boot with output sent to /dev/null should also work for those people who are using FreeBSD or Linux.
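
For the cron route, a single entry along these lines should do it. This is just a sketch; @reboot is supported by the stock cron on both FreeBSD and Linux, and the path matches the scripts above:

Code:
# crontab entry: start the keep-alive loop once at boot, discard its
# output, and background it so cron is not held open.
@reboot /mnt/drive/temp/run.sh > /dev/null 2>&1 &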

For a storage drive acting as temporary or scratch space, I feel this should work out pretty well in a single-drive configuration, which in some ways benefits from the drive's ability to attempt to reread files when an error may have occurred. The drive is still subjected to the standard testing protocols I have set up: two short SMART tests each week, a single long test each week, and a scrub once a week. I would not, however, advocate using the drive for any sensitive information, since it is very possible that data could be lost for any number of reasons, nor should these drives be used in any type of RAID array except maybe a mirror. I also cannot guarantee that this will work on any other systems, but I wanted to share my findings so others can use them in any way they may find useful. I will also update the status of the drive on a semi-regular basis until I take it out of service.
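
For anyone who wants to replicate that schedule outside the FreeNAS GUI, plain cron entries can drive smartctl directly. A sketch, again assuming the drive is ada1 and with "tank" standing in for your pool name:

Code:
# Short self-tests Monday and Thursday, a long test Saturday, and a
# scrub Sunday, all in the early morning.
0 3 * * 1,4 smartctl -t short /dev/ada1 > /dev/null
0 3 * * 6 smartctl -t long /dev/ada1 > /dev/null
0 4 * * 0 zpool scrub tank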
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
I'm using a few WD Blues myself; isn't it easier to just disable the head parking with idle3ctl rather than constantly running a script and interfering with the I/O? That's what I've done with all my Blues and Greens, and presto, no more LCC issues.
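
On the drives that still honour it, it's a couple of commands (the device name here is just an example):

Code:
# Read the current idle3 (head-parking) timer, then disable it.
# The drive needs a full power cycle before the change takes effect.
idle3ctl -g /dev/sdb
idle3ctl -d /dev/sdb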
 

Night Shade

New Member
Nov 8, 2018
The newer Blue drives are not compatible with WDIDLE3, so they likely will not work with any other tools either.

Wdidle3 not working on new BLUE drives ;-( Any help?

WD is basically forcing drives to be used only in particular situations. Even though the basic hardware is the same or similar, they can push a user toward a more expensive Red drive for a NAS build, even if it will only ever be used as a single drive.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Thanks for the info - it's been a while since I've bought a green/blue so I wasn't aware of this change; I moved mostly to Toshiba/HGST precisely to avoid the sort of firmware shenanigans WD kept pulling.
 

Night Shade

New Member
Nov 8, 2018
Yeah, I understand. I partly bought the drive just to see if I could make this work, and the I/O from listing the directory contents is minuscule, so it has not been an issue. Plus, using it to store data temporarily before I transfer it to the pool isn't much different from what I would do on a desktop, but the program can run on the FreeNAS box, which just simplifies my life.