FreeNAS pools healthy, but errors on disk?


methos

New Member
Dec 19, 2013
Canton, OH
I get the alerts below when logging into FreeNAS; however, everything "appears" healthy.

Build FreeNAS-9.3-STABLE-201508250051
Hardware: SuperMicro,

  • CRITICAL: Device: /dev/ada7, 10 Currently unreadable (pending) sectors
  • CRITICAL: Device: /dev/ada7, 10 Offline uncorrectable sectors
  • CRITICAL: Device: /dev/ada0, 1 Offline uncorrectable sectors
  • OK: There is a new update available! Apply it in System -> Update tab.
  • CRITICAL: The capacity for the volume 'xtank' is currently at 91%, while the recommended value is below 80%.
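
On the capacity alert: a quick way to see the same pool-level figure directly is (a sketch, using the pool name from the listing below):

zpool list xtank

The CAP column there should line up with the percentage in the warning; it won't match the numbers from zfs list exactly, since zfs list reports space after RAIDZ parity.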

[root@san02] ~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
USBMOB 2.08T 2.31T 2.08T /mnt/USBMOB
freenas-boot 800M 6.42G 31K none
freenas-boot/ROOT 785M 6.42G 25K none
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201509022158 771M 6.42G 524M /
freenas-boot/ROOT/Initial-Install 1K 6.42G 510M legacy
freenas-boot/ROOT/default 13.8M 6.42G 524M legacy
freenas-boot/grub 13.6M 6.42G 6.79M legacy
xtank 3.30T 212G 49.5K /mnt/xtank
xtank/ahsay 3.19T 212G 2.72T /mnt/xtank/ahsay
xtank/ftp 114G 212G 114G /mnt/xtank/ftp
ztank 1.31T 1.32T 71.8K /mnt/ztank
ztank/.system 205M 1.32T 166M legacy
ztank/.system/configs-99e4bfc46fa746999fbe43521aa4e3fd 43.4K 1.32T 43.4K legacy
ztank/.system/cores 31.8M 1.32T 31.8M legacy
ztank/.system/rrd-99e4bfc46fa746999fbe43521aa4e3fd 43.4K 1.32T 43.4K legacy
ztank/.system/samba4 4.72M 1.32T 4.72M legacy
ztank/.system/syslog-99e4bfc46fa746999fbe43521aa4e3fd 2.61M 1.32T 2.61M legacy
ztank/dump 378M 1.32T 378M /mnt/ztank/dump
ztank/jails 25.4K 1.32T 25.4K /mnt/ztank/jails
ztank/sp_images 148M 1.32T 148M /mnt/ztank/sp_images
ztank/test 1.31T 2.03T 611G -
[root@san02] ~#





[root@san02] ~# zpool status
pool: USBMOB
state: ONLINE
scan: scrub repaired 0 in 18h49m with 0 errors on Sun Oct 11 18:49:49 2015
config:

NAME STATE READ WRITE CKSUM
USBMOB ONLINE 0 0 0
  gptid/40ced978-5024-11e5-a665-003048c3cf3e ONLINE 0 0 0

errors: No known data errors

pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0h1m with 0 errors on Fri Oct 2 03:46:52 2015
config:

NAME STATE READ WRITE CKSUM
freenas-boot ONLINE 0 0 0
  da0p2 ONLINE 0 0 0

errors: No known data errors

pool: xtank
state: ONLINE
scan: scrub repaired 0 in 14h9m with 0 errors on Sun Sep 27 14:09:33 2015
config:

NAME STATE READ WRITE CKSUM
xtank ONLINE 0 0 0
  raidz1-0 ONLINE 0 0 0
    gptid/a2761390-5773-11e2-bf5d-003048c3cf3e ONLINE 0 0 0
    gptid/a2daf91a-5773-11e2-bf5d-003048c3cf3e ONLINE 0 0 0
    gptid/a34308c4-5773-11e2-bf5d-003048c3cf3e ONLINE 0 0 0
    gptid/a3aabc7b-5773-11e2-bf5d-003048c3cf3e ONLINE 0 0 0
    gptid/e141fdfd-b6c0-11e4-a51a-003048c3cf3e ONLINE 0 0 0

errors: No known data errors

pool: ztank
state: ONLINE
scan: scrub repaired 0 in 5h49m with 0 errors on Sun Oct 4 02:49:33 2015
config:

NAME STATE READ WRITE CKSUM
ztank ONLINE 0 0 0
  raidz1-0 ONLINE 0 0 0
    gptid/e660e8e7-536d-11e2-be08-003048c3cf3e ONLINE 0 0 0
    gptid/2654eda6-4d5a-11e2-be08-003048c3cf3e ONLINE 0 0 0
    gptid/26ab7aa8-4d5a-11e2-be08-003048c3cf3e ONLINE 0 0 0
    gptid/2703e5c2-4d5a-11e2-be08-003048c3cf3e ONLINE 0 0 0
logs
  mirror-1 ONLINE 0 0 0
    gptid/2763984d-4d5a-11e2-be08-003048c3cf3e ONLINE 0 0 0
    gptid/27bab769-4d5a-11e2-be08-003048c3cf3e ONLINE 0 0 0

errors: No known data errors
[root@san02] ~#
[root@san02] ~#
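
Side note: the alerts name /dev/ada0 and /dev/ada7, but zpool status only shows gptid labels. To map one to the other, "glabel status" on the FreeNAS shell lists each gptid label next to its adaXpY partition, e.g. (sketch, the exact labels will differ):

glabel status | grep -E 'ada0|ada7'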
 

methos

New Member
Dec 19, 2013
Canton, OH
Check SMART data via smartctl / smartmontools

The HD being reported as bad shows the following per smartctl; *shrug* looks good?



[root@san02] ~# smartctl -a /dev/ada7
smartctl 6.3 2014-07-26 r3976 [FreeBSD 9.3-RELEASE-p23 amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, smartmontools

=== START OF INFORMATION SECTION ===
Model Family: Western Digital RE4
Device Model: WDC WD1003FBYX-01Y7B1
Serial Number: WD-WCAW34085596
LU WWN Device Id: 5 0014ee 2074b690d
Firmware Version: 01.01V02
User Capacity: 1,000,204,886,016 bytes [1.00 TB]
Sector Size: 512 bytes logical/physical
Rotation Rate: 7200 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS (minor revision not indicated)
SATA Version is: SATA 3.0, 3.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Wed Oct 21 19:53:02 2015 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
 

xnoodle

Active Member
Jan 4, 2011
SMART is passing, but the critical message was for two different attributes. Look for the "Currently unreadable (pending) sectors" and "Offline uncorrectable sectors" counts. If those continue to increase, you may want to check your cabling and/or RMA the drive.
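
For reference, those two counters are SMART attributes 197 (Current_Pending_Sector) and 198 (Offline_Uncorrectable). You can watch just those lines with something like this (attribute names can vary slightly between drive models):

smartctl -A /dev/ada7 | egrep 'Pending_Sector|Offline_Uncorrectable'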
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Not a FreeNAS user, but does it automatically schedule SMART self-tests? Most distros I've run into don't enable them by default and I don't know if FreeNAS is any different. Have you tried running a short one against the offending drive manually (if other FreeNAS people say it's doable)?

The FreeNAS doc I read here seems to suggest they're not on by default since it talks about setting up a cron for them.
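
If it is doable from the FreeNAS shell, the manual version would be something along these lines (a sketch, assuming smartctl there behaves the same as on Linux):

smartctl -t short /dev/ada7     # kick off a short self-test (takes a few minutes)
smartctl -l selftest /dev/ada7  # read the self-test log once it finishes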
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
New York City
www.glaver.org
SMART is passing, but the critical message was for two different attributes. Look for the "Currently unreadable (pending) sectors" and "Offline uncorrectable sectors" counts. If those continue to increase, you may want to check your cabling and/or RMA the drive.
I'd replace them proactively. Things may not get worse (although in my experience they will), but they definitely won't get better. I've never had a manufacturer balk at a replacement for even a single sector. If these are used and/or OEM drives, there may not be a warranty on them. If the original poster bought them all at the same time from the same seller, they may all have been mishandled at some point, and even if they haven't, drives of the same age and model tend to start acting up around the same time. The last thing you want is another drive failing in the middle of a ZFS resilver / RAID rebuild.
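
For what it's worth, the ZFS side of the swap is just a "zpool replace" against the failing member, though FreeNAS 9.3 normally wants you to do it through the GUI (Volume Status -> Replace, if I remember right) so it can partition and gptid-label the new disk for you. Rough sketch only; the label and device below are placeholders, not taken from the output above:

zpool replace xtank gptid/<old-label> /dev/ada8   # swap the bad member for the new disk
zpool status -v xtank                             # watch the resilver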