Search results

  1. What is the condition of used HGST HUH721010ALE601 10 TB Helium drives sold on eBay?

    I would be interested in more info about the state of the drives once you get them!
  2. What is the condition of used HGST HUH721010ALE601 10 TB Helium drives sold on eBay?

    I asked Insidesystems how they calculate HDD "health level" (their FAQ says all HDDs they sell have a "health level" of at least 80 %...): Unfortunately I got a non-answer back: Well, duh! I don't feel like spending energy trying to pry more info from them, and I also noticed they sell the...
  3. Has anyone tried bcachefs yet?

    One major feature that's missing is support for send/receive. It's on the roadmap but obviously that doesn't help today.
  4. Has anyone tried bcachefs yet?

    Perhaps it's time to wake this thread from the dead now that bcachefs has been accepted in the mainline kernel (since linux-6.7 it seems). Now including working snapshots, compression (gzip/lz4/zstd), writethrough caching, and more! Anyone tried it? It looks really nice on paper, with its...
  5. EU [FS][EU-IT] Lot of 2 Intel SATA SSD D3-S4510 1.92TB NEW (factory sealed)

    I bought these and they arrived quickly and well packaged. Good communication from the seller! The drives are listed on the Solidigm warranty page when I search for their ISNs, and I could use Solidigm Storage Tool to identify the drives and update firmware. SMART data shows the drives are new...
  6. "net cache flush" needed with samba to restore write permissions - why?

    I serve some samba shares from my NAS. The server and all clients are running Linux (Gentoo, Ubuntu, Debian). Now and again a user loses write permissions to a share: the share mounts okay and the files can be read, but when I try to modify a file or create a new one I get a "Permission denied"...
  7. Inside an Intel DC S3500 240 GB

    Out of curiosity I opened up one of these drives and took some photos, and thought it could be fun to share. Enjoy!
  8. EU [FS] Supermicro A2SDI-2C-HLN4F in CSE-101F - 2 Units

    What kind of memory is in these kits? UDIMM/RDIMM, ECC or not?
  9. Slowdown over time of Crucial P3 SSD (vs Intel P4610)

    Interesting, thanks! That's rather horrible. I would have expected (or at least accepted) that kind of slowdown of writes due to the drive filling up, reducing the available SLC cache. Two orders of magnitude slowdown of reads though ... o_O That sounds interesting, I'd like to read that! Is...
  10. Slowdown over time of Crucial P3 SSD (vs Intel P4610)

    I guess I am; at least Crucial haven't released any updated firmware for the drive. Yes, but even if the controller does do a lot, 1) it shouldn't affect the number of known-by-the-drive free blocks over time, and 2) it should have reasonably caused the same amount of slowdown from the start...
  11. Slowdown over time of Crucial P3 SSD (vs Intel P4610)

    Yes but no, huh? :) For the umpteenth time, I already know that this drive serves me well (so far at least). :) This thread is not about solving a problem (or using multiple of these drives in a pool, or getting back to higher speeds, or...). On the contrary: it's all about understanding the...
  12. Slowdown over time of Crucial P3 SSD (vs Intel P4610)

    For completeness, here's the smart data after the scrub completed. The starting conditions were the same as the earlier data, i.e. no further data was written between taking the previous smart data and starting the scrub. (This scrub took 1:43:51 to finish, BTW.) Available Spare...
  13. Slowdown over time of Crucial P3 SSD (vs Intel P4610)

    I started a manual scrub and it seems the scrub itself generates a little bit of write activity, which is reflected in the Data Units Written and Host Write Commands fields. So yeah, maybe there's (yet another) bug in zfs in that it doesn't trim blocks freed during a scrub, and the Data Units...
  14. Slowdown over time of Crucial P3 SSD (vs Intel P4610)

    The smart stats I'm referring to are the "Data Units Written", i.e. the number of 512 kB blocks written. Not the host write stats (serial numbers and likely irrelevant lines removed for brevity): # smartctl -a <path-to-disk> === START OF INFORMATION SECTION === Model Number...
  15. Slowdown over time of Crucial P3 SSD (vs Intel P4610)

    Hmm, well, that shouldn't be the case here given how the drive's been used, and the 2.52 TB total written reported by smartctl? It's a whole-disk zfs pool used for WORM storage only (no OS, log files or similar). zpool autotrim is on, and it's mounted using noatime. So the 2.52 TB total written...
  16. Slowdown over time of Crucial P3 SSD (vs Intel P4610)

    @T_Minus: You mean you have seen similar behaviour before? (Again: the slowdown over time is not a problem for me, I just found it interesting. The workload is me writing almost 2 TB to the drive and then reading parts of it back now and again, while also adding a few tens of gigabytes of new...
  17. Slowdown over time of Crucial P3 SSD (vs Intel P4610)

    Thanks, pimposh. Yeah, I know the P3 is a bottom-of-the-barrel drive. So I don't really have any expectations on its performance. This phenomenon though - the slowdown of reads with age of the written data - is something I haven't seen mentioned before, and I thought it was interesting. Does it...
  18. Slowdown over time of Crucial P3 SSD (vs Intel P4610)

    In November last year I started using a newly built NAS. The main storage drives are: One 6.4 TB Intel P4610 (Oracle FW...) hooked up via an adapter card in a PCIe slot. Used for OS + VM images + remote file storage for desktop computers. One Crucial P3 4 TB in an M.2 slot. Used for WORM storage -...
  19. ZFS Elephant In The Room (all NVMe array)

    I didn't realise encryption would slow down IO operations like that; I thought it would mostly affect throughput. Always nice to learn something new. Thanks!
  20. ZFS Elephant In The Room (all NVMe array)

    Rather a single sector? As I understand it, since the recsize is a max value and smaller ZFS blocks are used for storing smaller files regardless of the recsize setting, a lower recsize setting shouldn't make a difference for small files. The use case for smaller recsize is when you have large...
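
A minimal command sketch for the Samba fix discussed in result 6 above, assuming a standard smbd setup; the mount point and file name are hypothetical:

    # On the Samba server, as root: flush Samba's internal caches
    # (gencache and friends), which the thread reports restores write access.
    net cache flush

    # On a client, re-test writing to the mounted share:
    touch /mnt/nasshare/write-test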
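
Results 14 and 15 refer to the NVMe "Data Units Written" counter, which smartctl reports in units of 512,000 bytes (512 kB). A quick sanity check of the 2.52 TB figure, with the unit count below chosen purely for illustration:

    # Read the counter (device path is an example):
    smartctl -a /dev/nvme0 | grep 'Data Units Written'

    # Convert units to bytes: a hypothetical reading of 4,921,875 units
    # works out to 4921875 * 512000 = 2,520,000,000,000 bytes, i.e. about 2.52 TB.
    echo $(( 4921875 * 512000 ))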
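
Result 15 mentions the pool configuration used on the Crucial P3. A sketch of the equivalent settings, assuming a single-disk pool; the pool and dataset names are hypothetical, and atime=off is the ZFS-native counterpart of a noatime mount:

    # Let ZFS issue TRIM automatically as blocks are freed:
    zpool set autotrim=on tank

    # Skip access-time updates on the WORM dataset:
    zfs set atime=off tank/worm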
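
Result 20 describes how recordsize behaves: it is an upper bound on the ZFS block size, and files smaller than that bound already get smaller blocks, so lowering it mainly matters for partial rewrites inside large files. A sketch of checking and changing it (dataset name hypothetical; the change only affects blocks written afterwards):

    # Show the current cap (the default is 128K):
    zfs get recordsize tank/data

    # Lower it for a dataset dominated by small random rewrites of large files:
    zfs set recordsize=16K tank/data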