I'm experiencing a ZFS disk failure for the first time since I started working with ZFS. The disk is part of a RAID10 pool. The strange thing is that smartctl has given inconsistent reports over the last couple of weeks: sometimes with an error, sometimes without. I had only ever run short self-tests, which may have been a mistake. Until now I could simply scrub and clear the ZFS errors while the disk wasn't marked as 'faulted', and everything was fine. But this time ZFS shows the disk as 'faulted' and the pool as 'degraded'. I have just started a final long self-test.
Another smartctl report from Proxmox shows no errors at all.
If the final smartctl report comes back negative again, with no errors, would you still replace the disk? Or should I also investigate the HBA, which might be the source of the errors? What else would you check and try in order to rule out false positives before exchanging the disk? Any opinion and/or comment is appreciated.
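For reference, the steps I've been going through (long self-test, checking pool state, scrub, clear) are roughly the following; /dev/sdX and the pool name "tank" are placeholders for my actual device and pool:

```shell
#!/bin/sh
# Placeholder device and pool names -- substitute your own.
DISK=/dev/sdX
POOL=tank

# Guard so the script degrades gracefully on a machine without the tools.
if command -v smartctl >/dev/null 2>&1 && command -v zpool >/dev/null 2>&1; then
    # Start a long (extended) SMART self-test; results appear in the
    # self-test log once it finishes (can take hours on large disks).
    smartctl -t long "$DISK"

    # Afterwards, review the self-test log, attributes, and error log.
    smartctl -a "$DISK"

    # Check the pool state and the per-device read/write/checksum counters.
    zpool status -v "$POOL"

    # A scrub re-reads and verifies every block in the pool...
    zpool scrub "$POOL"

    # ...and clear resets the error counters once the pool is healthy again.
    zpool clear "$POOL"
else
    echo "smartctl and/or zpool not available on this machine"
fi
```

This is just a sketch of the sequence, not a recommendation; I'd obviously wait for the long self-test to complete before reading its log.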
Thank you in advance.
Mike