I asked Insidesystems how they calculate HDD "health level" (their FAQ says all HDDs they sell have a "health level" of at least 80%...):
Unfortunately I got a non-answer back:
Well, duh! I don't feel like spending energy trying to pry more info from them, and I also noticed they sell the...
Perhaps it's time to wake this thread from the dead now that bcachefs has been accepted into the mainline kernel (since linux-6.7, it seems). It now includes working snapshots, compression (gzip/lz4/zstd), writethrough caching, and more!
Anyone tried it? It looks really nice on paper, with its...
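For anyone curious, getting started looks simple enough. This is only a sketch from the docs, not something I've run myself; the device names and labels are made up, and formatting will of course wipe the devices:

```shell
# Single device with zstd compression (hypothetical device name):
bcachefs format --compression=zstd /dev/sdb
mount -t bcachefs /dev/sdb /mnt

# Tiered SSD+HDD layout: writes land on the SSD and are moved to the
# HDD in the background, with hot data promoted back to the SSD
# (labels and targets are illustrative):
bcachefs format \
    --label=ssd.ssd1 /dev/nvme0n1 \
    --label=hdd.hdd1 /dev/sdb \
    --foreground_target=ssd \
    --promote_target=ssd \
    --background_target=hdd
```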
I bought these and they arrived quickly and well packaged. Good communication from the seller!
The drives are listed on the Solidigm warranty page when I search for their ISNs, and I could use Solidigm Storage Tool to identify the drives and update firmware. SMART data shows the drives are new...
I serve some samba shares from my NAS. The server and all clients are running Linux (Gentoo, Ubuntu, Debian). Now and again a client loses write permission to a share: the share mounts okay and the files can be read, but trying to modify a file or create a new one fails with "Permission denied"...
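For context, the usual suspects I check first are the server-side share definition and the resulting file ownership. Something along these lines is what I'd expect to make group writes work across different client users (share name, path and group here are illustrative, not my actual config):

```
# /etc/samba/smb.conf on the server (illustrative values):
[data]
   path = /srv/data
   read only = no
   valid users = @users
   # make new files group-writable so other clients can modify them:
   force group = users
   create mask = 0664
   directory mask = 2775
```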
Interesting, thanks!
That's rather horrible. I would have expected (or at least accepted) that kind of slowdown of writes due to the drive filling up, reducing the available SLC cache. Two orders of magnitude slowdown of reads though ... o_O
That sounds interesting, I'd like to read that! Is...
I guess I am; at least Crucial haven't released any updated firmware for the drive.
Yes, but even if the controller does do a lot, 1) it shouldn't affect the number of known-by-the-drive free blocks over time, and 2) it should reasonably have caused the same amount of slowdown from the start...
Yes but no, huh? :)
For the umpteenth time, I already know that this drive serves me well (so far at least). :) This thread is not about solving a problem (or using multiple of these drives in a pool, or getting back to higher speeds, or...). On the contrary: it's all about understanding the...
For completeness, here's the SMART data after the scrub completed. The starting conditions were the same as the earlier data, i.e. no further data was written between taking the previous SMART data and starting the scrub. (This scrub took 1:43:51 to finish, BTW.)
Available Spare...
I started a manual scrub and it seems the scrub itself generates a little bit of write activity, which is reflected in the Data Units Written and Host Write Commands fields. So yeah, maybe there's (yet another) bug in zfs in that it doesn't trim blocks freed during a scrub, and the Data Units...
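To poke at that theory, the trim state of the pool is at least easy to inspect; a manual trim of free space can be kicked off and watched like this (pool name is hypothetical):

```shell
zpool get autotrim tank    # confirm the autotrim property is on
zpool trim tank            # manually TRIM the pool's free space
zpool status -t tank       # -t adds per-vdev trim progress/status
```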
The SMART stats I'm referring to are "Data Units Written", i.e. the number of 512 kB units written, not the Host Write Commands count (serial numbers and likely irrelevant lines removed for brevity):
# smartctl -a <path-to-disk>
=== START OF INFORMATION SECTION ===
Model Number...
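For reference on the units: an NVMe "data unit" is 1000 × 512 bytes = 512 kB, so converting the raw counter to terabytes is simple arithmetic. (The counter value below is a hypothetical figure for illustration, not read off my drive.)

```shell
data_units=4921875   # hypothetical raw "Data Units Written" value
bytes=$((data_units * 512000))
tb=$(awk -v b="$bytes" 'BEGIN { printf "%.2f", b / 1e12 }')
echo "$data_units data units = $bytes bytes = $tb TB"
```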
Hmm, well, that shouldn't be the case here given how the drive's been used, and the 2.52 TB total written reported by smartctl? It's a whole-disk zfs pool used for WORM storage only (no OS, log files or similar). zpool autotrim is on, and it's mounted using noatime. So the 2.52 TB total written...