Well, it appears the SATA cable was bad; after replacing it, the HD is back alive. I spent too many hours because of this!!
Now zpool status is back to ONLINE, but I still get the same broken output:
Any idea what could be done to mitigate this?
I'm running Fedora 35.
I've tried installing new updates, but still the same broken output.
I'm starting to think this troubleshooting isn't worth the effort.
Perhaps it would be easier to copy all the data to an external drive and rebuild the zfs pool with the 3 hard drives.
in case...
I'm 100% sure I had the 3 disks configured as RAIDZ1.
Notice the screenshot below: SIZE 5.44T for three 2TB HDs.
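If I do go the rebuild route, something along these lines is what I have in mind - a send/receive copy to the external drive, then recreate the pool. Pool and device names here are placeholders, not my actual setup:

```shell
# "tank" = the existing pool, "backup" = a pool on the external drive
# (both names and all /dev paths below are placeholders).
zpool create backup /dev/sdX

# Recursive snapshot of the whole pool, then replicate it to the backup pool:
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F backup/tank

# After verifying the copy, destroy and recreate the pool as RAIDZ1:
zpool destroy tank
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
zfs send -R backup/tank@migrate | zfs receive -F tank
```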
@Bjorn Smith
Still the same DEGRADED output after 1 HD was physically removed :(
I found this article, which seems pretty straightforward:
https://knowledgebase.45drives.com/kb/kb450412-replacing-drives-in-zfs-pool-on-ubuntu-20-04/
Although, as you may remember, the degraded HD was NOT listed in the zpool status output, so I'm not sure the replace command would work.
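From what I can tell, if the missing disk shows up only as a numeric ID rather than a device name, the replace can still be attempted by vdev GUID. Pool name, GUID, and device path below are placeholders:

```shell
# Show vdev GUIDs instead of device names ("tank" is a placeholder pool name):
zpool status -g tank

# Replace the missing member by its GUID with the new disk
# (GUID and /dev/sdd are placeholders taken from the status output):
zpool replace tank 1234567890123456789 /dev/sdd
```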
My apologies for the late response - I was quite sick for a few days.
I managed to identify the "bad" hard drive and just bought a new 2TB HD.
What is the correct process for replacing a bad HD with a new one in ZFS?
Thanks in advance.
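From what I've read, the usual sequence is roughly the following - pool and device names are placeholders, not confirmed for my setup:

```shell
# Take the failed disk offline if it is still listed (placeholder names):
zpool offline tank /dev/sdb

# Physically swap the disk, then tell ZFS to resilver onto the new one:
zpool replace tank /dev/sdb /dev/sdd

# Watch the resilver progress until the pool is ONLINE again:
zpool status -v tank
```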
Thanks for your reply.
Unfortunately, no useful output is given:
[root@bugsy tux]# zpool status -vLP
invalid option 'L'
usage:
status [-vx] [-T d|u] [pool] ... [interval [count]]
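For what it's worth, -L and -P appear to be flags from newer OpenZFS releases; on a zpool that only advertises status [-vx], the plain variants should still work:

```shell
# Fallback for older zfs releases that reject -L/-P:
zpool status -v        # verbose status, lists files with errors
zpool status -x        # show only pools with problems
```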
Hello All,
My zfs pool contains three 2TB disks, all of them about 5 years old. Two days ago I noticed that one was missing when the PC booted.
While checking the zfs status, I noticed it is DEGRADED, although I was unable to get any detailed information about it.
[root@bugsy tux]# zpool...