I asked Insidesystems how they calculate HDD "health level" (their FAQ says all HDDs they sell have a "health level" of at least 80 %...):
Unfortunately I got a non-answer back:
Well, duh! I don't feel like spending energy trying to pry more info from them, and I also noticed they sell the...
Perhaps it's time to wake this thread from the dead now that bcachefs has been accepted into the mainline kernel (since Linux 6.7, it seems). It now includes working snapshots, compression (gzip/lz4/zstd), writethrough caching, and more!
Anyone tried it? It looks really nice on paper, with its...
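I haven't tried it myself yet, but going by the bcachefs-tools docs, a multi-device setup using those features would look roughly like this (device names and labels are my own made-up examples, not tested):

```shell
# Format an SSD + HDD pair: foreground writes and read promotion on the
# SSD, bulk data migrated to the HDD in the background.
bcachefs format \
  --compression=zstd \
  --label=ssd.ssd1 /dev/nvme0n1 \
  --label=hdd.hdd1 /dev/sda \
  --foreground_target=ssd \
  --promote_target=ssd \
  --background_target=hdd
mount -t bcachefs /dev/nvme0n1:/dev/sda /mnt

# Snapshots are writable snapshots of subvolumes:
bcachefs subvolume create /mnt/data
bcachefs subvolume snapshot /mnt/data /mnt/data.snap
```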
I bought these and they arrived quickly and well packaged. Good communication from the seller!
The drives are listed on the Solidigm warranty page when I search for their ISNs, and I could use Solidigm Storage Tool to identify the drives and update firmware. SMART data shows the drives are new...
I serve some samba shares from my NAS. The server and all clients are running Linux (Gentoo, Ubuntu, Debian). Now and again a user loses write permissions to a share: the share mounts okay and the files can be read, but when I try to modify a file or create a new one I get a "Permission denied"...
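Next time it happens I plan to run through something like this to narrow down whether it's the client mount, the share, or server-side permissions (paths and share names here are placeholders):

```shell
# On the client: is the share still mounted, and does a write actually fail?
mount | grep cifs
touch /mnt/share/.writetest && echo "write ok" || echo "write denied"

# On the server: active share connections, on-disk ownership/permissions,
# and the effective share config.
smbstatus --shares
ls -ld /srv/share
testparm -s 2>/dev/null | grep -A5 '\[share\]'
```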
Interesting, thanks!
That's rather horrible. I would have expected (or at least accepted) that kind of slowdown of writes due to the drive filling up, reducing the available SLC cache. Two orders of magnitude slowdown of reads though ... o_O
That sounds interesting, I'd like to read that! Is...
I guess I am; at least Crucial haven't released any updated firmware for the drive.
Yes, but even if the controller does do a lot: 1) it shouldn't affect the number of known-by-the-drive free blocks over time, and 2) it should reasonably have caused the same amount of slowdown from the start...
Yes but no, huh? :)
For the umpteenth time, I already know that this drive serves me well (so far at least). :) This thread is not about solving a problem (or using multiple of these drives in a pool, or getting back to higher speeds, or...). On the contrary: it's all about understanding the...
For completeness, here's the smart data after the scrub completed. The starting conditions were the same as the earlier data, i.e. no further data was written between taking the previous smart data and starting the scrub. (This scrub took 1:43:51 to finish, BTW.)
Available Spare...
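Back-of-the-envelope, assuming the scrub read roughly the 2.52 TB stored (my arithmetic, not measured):

```shell
# 1:43:51 in seconds, then average scrub read rate over 2.52 TB.
awk 'BEGIN{ tb=2.52e12; s=1*3600+43*60+51; printf "%d s, %.0f MB/s\n", s, tb/s/1e6 }'
```

That works out to about 404 MB/s, which seems plausible for sequential reads on this drive.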
I started a manual scrub and it seems the scrub itself generates a little bit of write activity, which is reflected in the Data Units Written and Host Write Commands fields. So yeah, maybe there's (yet another) bug in zfs in that it doesn't trim blocks freed during a scrub, and the Data Units...
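To quantify that, one could snapshot the counters around a scrub, something like this (device and pool names are placeholders; `zpool wait` needs OpenZFS 2.0+):

```shell
smartctl -a /dev/nvme1 | grep -E 'Data Units Written|Host Write Commands'
zpool scrub tank
zpool wait -t scrub tank   # block until the scrub finishes
smartctl -a /dev/nvme1 | grep -E 'Data Units Written|Host Write Commands'
```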
The smart stats I'm referring to are the "Data Units Written", i.e. the number of 512 kB blocks written. Not the host write stats (serial numbers and likely irrelevant lines removed for brevity):
# smartctl -a <path-to-disk>
=== START OF INFORMATION SECTION ===
Model Number...
Hmm, well, that shouldn't be the case here given how the drive's been used, and the 2.52 TB total written reported by smartctl? It's a whole-disk zfs pool used for WORM storage only (no OS, log files or similar). zpool autotrim is on, and it's mounted using noatime. So the 2.52 TB total written...
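For reference, the 2.52 TB comes straight from the Data Units Written counter; per the NVMe spec each unit is 512,000 bytes (1000 × 512-byte sectors). The counter value below is a hypothetical one chosen to work out to exactly 2.52 TB; substitute your drive's number:

```shell
awk 'BEGIN{ units=4921875; printf "%.2f TB\n", units * 512000 / 1e12 }'
```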
@T_Minus: You mean you have seen similar behaviour before?
(Again: the slowdown over time is not a problem for me, I just found it interesting. The workload is me writing almost 2 TB to the drive and then reading parts of it back now and again, while also adding a few tenths of a gigabyte of new...
Thanks, pimposh. Yeah, I know the P3 is a bottom-of-the-barrel drive. So I don't really have any expectations on its performance. This phenomenon though - the slowdown of reads with age of the written data - is something I haven't seen mentioned before, and I thought it was interesting. Does it...
In November last year I started using a newly built NAS. The main storage drives are:
One 6.4 TB Intel P4610 (Oracle FW...) hooked up via an adapter card in a PCIe slot. Used for OS + VM images + remote file storage for desktop computers.
One Crucial P3 4 TB in M.2 slot. Used for WORM storage -...
I didn't realise encryption would slow down IO operations like that; I thought it would mostly affect throughput. Always nice to learn something new. Thanks!
Rather a single sector?
As I understand it, since the recsize is a max value and smaller ZFS blocks are used for storing smaller files regardless of the recsize setting, a lower recsize setting shouldn't make a difference for small files. The use case for smaller recsize is when you have large...
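A minimal sketch of what that means in practice (dataset name is a placeholder):

```shell
# recordsize is only an upper bound on the ZFS block size; it does not
# pad small files up to it.
zfs set recordsize=1M tank/worm
zfs get recordsize tank/worm
# A 4 KB file written after this still occupies a single ~4 KB block,
# while multi-megabyte files are split into 1M records.
```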