Hitachi 6TB NAS - make quieter?

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
I've never cared when running these in the datacenter, but at home they seem rather loud and "clickety". Is there a way to make them quieter? I remember some desktop drives had a mode for performance vs. quiet acoustics... they're loudest under load of course, and quiet while idle ;) They are definitely louder than my RE4s and whatnot.

From Linux commandline if possible ;)
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Oh well, these drives may not have AAM. These are the Deskstar NAS (0S03839) aka HGST HDN726060ALE610 drives.

[root@basement ~]# hdparm -M /dev/sdi

/dev/sdi:
acoustic = not supported

Clicky it is. I can close the door...
 
  • Like
Reactions: Lance Joseph

FMA1394

Active Member
Jan 11, 2013
624
186
43
Just point some Delta fans at them. Then you won't hear the drives clicking anymore.

Cheers. :eek:

Alternatively, show some bravado/class and use Nidec Servos instead.
 
  • Like
Reactions: Chuckleb

Lance Joseph

Member
Oct 5, 2014
79
32
18
[root@basement ~]# hdparm -M /dev/sdi

/dev/sdi:
acoustic = not supported

Clicky it is. I can close the door...
Does it sound like the hard drive heads are frequently parking?
You may be able to set the APM threshold on the drives.

If you run "smartctl -x -d sat /dev/sdi | grep APM" and there's a value, then I would tune it.
To disable power management completely (and prevent the heads from parking), use "hdparm -B255 /dev/sdi"

I did this over the weekend when I found the SMART attribute "Load_Cycle_Count" on my 3T drives was in the tens of thousands!
Afterwards, no more clicking noises and no more additional wear and tear on the drives :)
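If you've got a box full of drives, a loop along these lines saves some typing. Rough sketch only: the DRY_RUN guard and device list are my own additions for illustration, not anything hdparm itself provides.

```shell
#!/bin/sh
# Sketch: disable APM (and thus head parking) on a set of drives.
# With DRY_RUN=1 (the default here) it only prints what it would do,
# which is handy for checking the device list before touching anything.

park_off() {
    for dev in "$@"; do
        [ -e "$dev" ] || continue
        if [ "${DRY_RUN:-1}" = 1 ]; then
            echo "would run: hdparm -B255 $dev"
        else
            # Only bother with drives that actually report an APM feature.
            smartctl -x -d sat "$dev" | grep -q APM && hdparm -B255 "$dev"
        fi
    done
}

# Dry run over whatever whole-disk devices are present:
park_off /dev/sd[a-z]
```

Note hdparm -B may not survive a reboot on every setup, so you might want to hang this off a boot script.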
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Just checked, it's already disabled. Cool trick though, adding it to the collection.

[root@basement ~]# smartctl -x -d sat /dev/sda | grep APM
APM feature is: Disabled
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,266
428
83
I did this over the weekend when I found the SMART attribute "Load_Cycle_Count" on my 3T drives was in the tens of thousands!
On linux I've always used idle3-tools to knock this one on the head for WD drives but it looks like recent versions of hdparm also have the ability to turn this off. Is the hdparm setting persistent or does it need to be run each boot?

Code:
idle3ctl -d /dev/sda
Idle3 timer is disabled.
IIRC the WD drives need a full power cycle for this to take effect. I'd be interested to see if there's any correlation with HGST drives trying to do a similar thing...
 
  • Like
Reactions: Lance Joseph

Lance Joseph

Member
Oct 5, 2014
79
32
18
Is the hdparm setting persistent or does it need to be run each boot?

Code:
idle3ctl -d /dev/sda
Idle3 timer is disabled.
IIRC the WD drives need a full power cycle for this to take effect. I'd be interested to see if there's any correlation with HGST drives trying to do a similar thing...
From what I understand, hdparm may need to be run on each boot.
Idle3ctl sounds like a neat tool, I'll check it out! Thanks

I've started keeping a dump of SMART logs so that I can track changes over time.
I don't know of any tools out there that already do this, but here's my crude method:
Code:
# for f in /dev/sd*[^1-9] ; do smartctl -a -d sat $f >> /root/smartdump-`date +"%m-%d-%y"`.txt ; done

Eventually, I'd like to dump the logs, parse the data into a database, then run queries to track various changes.

Anyways, I've kept these logs and have found that the Load_Cycle_Count has not increased on my drives since using hdparm. Also I should note that my system has not been power cycled.
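As a step toward that database idea, the dump lines can at least be flattened into CSV rows first, which loads easily into sqlite or anything else. A sketch only: the sample attribute line below is made up, and real smartctl output varies by drive and firmware.

```shell
#!/bin/sh
# Sketch: turn "smartctl -a" attribute lines into date,device,value CSV rows
# for one chosen attribute, ready for bulk-loading into a database later.

extract_attr() {
    # $1 = attribute name, $2 = date tag, $3 = device; dump comes in on stdin
    awk -v attr="$1" -v d="$2" -v dev="$3" \
        '$2 == attr { print d "," dev "," $NF }'
}

# Example against a made-up smartctl attribute line:
echo "193 Load_Cycle_Count 0x0032 199 199 000 Old_age Always - 12345" |
    extract_attr Load_Cycle_Count 03-02-15 /dev/sdi
# -> 03-02-15,/dev/sdi,12345
```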
 
  • Like
Reactions: Chuckleb

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Ah yes, data analysis on pools of HDDs. I keep wanting to do this, maybe I should. We use collectl and Graylog for quick data analysis at work... I'll have to see if I can replicate that at home.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,266
428
83
A more elegant solution might be to use smartmontools in daemon mode (i.e. smartd), which keeps a set of SMART attributes in state files (in /var/lib/smartmontools on Debian, IIRC). These are typically much easier to pull attributes from than running smartctl manually every time; for one thing, smartctl almost always has to run as root, whereas a regular user can just read the state files. It should be a relatively simple matter to set up a cron job that pulls certain attributes from each file every day or so, plugs them into a table/BDB/sqlite/whatever, and pipes them out to gnuplot or similar...

Edit: Found a chap doing the smartctl command line method into gnuplot here: Japanese Soapbox: Graphing HDD health with smartctl
 

rubylaser

Active Member
Jan 4, 2013
842
229
43
Michigan, USA
Graphing SMART data is awesome, but the one downside is if you want to spin your disks down: having smartd constantly polling the disks for SMART data keeps them spun up. There are configuration options that prevent smartd from querying a disk while it's in standby. I cover this here.

#60 Spin Down Idle Hard Disks
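For anyone who doesn't want to click through, the smartd.conf directive in question looks something like this (the device name is a placeholder; -n standby skips the check while the drive is spun down, and the optional ,q quiets the log message about the skipped poll):

Code:
# /etc/smartd.conf
# Monitor all attributes, but skip the check (quietly) if the drive is in standby
/dev/sda -a -n standby,q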
 
  • Like
Reactions: Lance Joseph

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,266
428
83
Was gonna say, -n standby in the smartd config file will do that but you've got it pointed out in your doco.

Incidentally, and a bit more on-topic, is putting your discs into spin-down a recommended procedure yet...? I know it'll do wonders for power use, noise and the like, but prior experience with mdadm and spun-down discs resulted in a) multi-second lags waiting for the discs to spin back up and b) lower reliability of the discs as a whole. Am I just being overly cautious...? Do some discs deal with it better than others?
 

rubylaser

Active Member
Jan 4, 2013
842
229
43
Michigan, USA
Was gonna say, -n standby in the smartd config file will do that but you've got it pointed out in your doco.

Incidentally, and a bit more on-topic, is putting your discs into spin-down a recommended procedure yet...? I know it'll do wonders for power use, noise and the like, but prior experience with mdadm and spun-down discs resulted in a) multi-second lags waiting for the discs to spin back up and b) lower reliability of the discs as a whole. Am I just being overly cautious...? Do some discs deal with it better than others?
You will have disks lag when they spin back up if they are spun down; there is no way around that. I don't suggest using spindown with traditional hardware RAID or software RAID (ZFS, mdadm, etc.), as all disks need to be spun back up to service a read.

I do use spindown on my SnapRAID array, as only one disk needs to be spun up to service a read. This leaves almost all of my disks spun down most of the time. In terms of longevity, I have some of what many would consider the least reliable disks (Seagate 3TB ST3000DM001) as part of my SnapRAID pool with over 30,000 hours on them. Their SMART data is fine and they spend most of their time in standby. On my home server, the difference in watts between 24 spindles constantly spinning and mostly spun-down disks is rather large.