HDD tests: full surface scan vs. SMART data

mmaretrotech

Member
Jun 22, 2025
35
1
8
I bought my 2.5" HDDs and a USB 3.0 enclosure together in 2023, and in 2024 I ran a full surface scan with SeaTools and HD Tune; the SMART data in CrystalDiskInfo was good. I access these HDDs once a year to monitor their degradation before a drive dies and I lose files. Is a full surface test (many hours) necessary every year, or is just reading the SMART data in CrystalDiskInfo enough?
 

Jelle458

Member
Oct 4, 2022
62
28
18
I'd say reading the SMART data is enough. I'm not sure what a surface scan actually does (I haven't really used the tools you mentioned). I believe smartctl is the better tool.

I have seen drives look just fine in their SMART data but then fail the long self-test, so you could run that a few times per year.

Smartctl has both Windows and Linux variants.
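
For example, something along these lines should kick off the long self-test (replace /dev/sdX with your actual drive; smartctl --scan lists what it can see):

Code:
# list the drives smartctl can see
smartctl --scan

# show drive capabilities, including the estimated duration of the extended self-test
smartctl -c /dev/sdX

# start the long (extended) self-test; it runs inside the drive's firmware
smartctl -t long /dev/sdX

You can start it and come back hours later; the drive does the work on its own.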
 

Jelle458

Member
Oct 4, 2022
62
28
18
So the surface/error scan is just a tool that checks whether each sector is readable.

I would steer away from that; it doesn't make much sense, because the long self-test does exactly the same thing, just in the background with minimal interruption to normal use of the drive. The long self-test also lets you see problems that aren't problems yet but could be very soon.

The surface scan can also only report problems that the OS encounters. It doesn't talk to SMART while testing, so hardware-level errors are not even found.

The long self-test is basically the same, except that it adds further checks such as mechanical, ECC, retry and remapping testing. While your scan looks like it would catch many things, I would not rely on it in any way; use the SMART long self-test instead. It will also tell you there is a problem before the OS even knows something might be wrong.

The long self-test is also what Backblaze runs (they use smartctl for the job), and they publish a quarterly report on failed HDDs.
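
Once the test has finished, checking the outcome looks roughly like this (/dev/sdX is a placeholder for your actual drive):

Code:
# overall pass/fail verdict
smartctl -H /dev/sdX

# self-test log: shows whether the last long test completed without errors
smartctl -l selftest /dev/sdX

# raw attributes; watch the reallocated, pending and uncorrectable sector counts
smartctl -A /dev/sdX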

These small "tools" are a bit annoying for those of us in the industry, because people take their "scans" at face value. What I see most often is HD Sentinel with its dubious "drive health" calculation for HDDs; people just take that percentage and say it "must be above 80%", which can be almost impossible to reach given its two different algorithms for calculating an HDD's health percentage.
 

BackupProphet

Well-Known Member
Jul 2, 2014
1,385
1,028
113
Stavanger, Norway
intellistream.ai
Checking whether the surface is readable is, in my opinion, not enough; you also need to check the latency of each sector read. Some sectors may need a lot of ECC error correction and can take seconds to read. When this happens to enough sectors on a production system, I would strongly consider replacing the hard drive.

Unfortunately there are no tools for this, not even badblocks. I am planning to write my own when I get the time.
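
Something like this crude shell loop gets part of the way there in the meantime: read the raw device in big chunks and flag the slow ones. It's only a sketch (device name, chunk size and the one-second threshold are arbitrary assumptions), not a proper per-sector latency tool:

Code:
#!/usr/bin/env bash
# crude read-latency scan: read an unmounted drive in 10 MiB chunks and flag slow ones
DEV=/dev/sdX                  # placeholder - point this at the drive under test
CHUNK_MB=10
SIZE_MB=$(( $(blockdev --getsize64 "$DEV") / 1024 / 1024 ))

for (( off = 0; off < SIZE_MB; off += CHUNK_MB )); do
    start=$(date +%s%N)
    dd if="$DEV" of=/dev/null bs=1M skip="$off" count="$CHUNK_MB" iflag=direct status=none
    ms=$(( ( $(date +%s%N) - start ) / 1000000 ))
    # a healthy sequential read of 10 MiB should take far less than a second
    (( ms > 1000 )) && echo "slow read around ${off} MiB: ${ms} ms"
done

It still reads the whole drive, so it takes as long as any other surface scan, and chunk-level timing is only a rough stand-in for per-sector latency.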
 
  • Like
Reactions: nexox and Jelle458

mmaretrotech

Member
Jun 22, 2025
35
1
8
Is opening CrystalDiskInfo and viewing the SMART information enough to determine the health of the drive and whether it's still fit to store files? The error test from HD Tune, the SeaTools Generic Long Test and other such software takes many hours and causes the drive to overheat.

SMART takes one second. Can I trust this test?
 

Jelle458

Member
Oct 4, 2022
62
28
18
SMART takes one second because that's just reading the stored attributes, or at most the short self-test, which only does a quick mechanical check.

If the drive overheats during tests, you should get some more cooling for it; you will run into the same heat during normal use as well.

As previously mentioned, you need to use smartmontools to start the long self-test, and it will take time to run.
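
With a USB enclosure it would look something like this (device names are placeholders; many USB bridges need -d sat to pass SMART commands through, and some cheap ones don't pass them at all):

Code:
# see how smartctl enumerates the attached drives
smartctl --scan

# query a USB-attached drive through a SAT bridge
smartctl -d sat -a /dev/sdb

# start the long self-test on that drive
smartctl -d sat -t long /dev/sdb

On Windows, smartmontools accepts the same /dev/sdX style device names.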
 

mmaretrotech

Member
Jun 22, 2025
35
1
8
Where can I download smartmontools for free with an interface for Windows 10? I don't have a DVD drive or a USB flash drive.

Is it necessary to run the smartmontools test every year?
 

Jelle458

Member
Oct 4, 2022
62
28
18
I did not really understand that. If you are looking for a GUI, you are out of luck; smartmontools is about as simple as it gets. I posted the link earlier.
 

mmaretrotech

Member
Jun 22, 2025
35
1
8
I have two 2.5" HDDs inside a USB 3.0 case with files. I store these HDDs and access them once a year.

Once a year, what procedure should I follow to check the health and integrity of the HDD and data? Is a scan lasting several hours necessary?

Last year, in 2024, I ran the SeaTools Generic Long Test and the HD Tune Pro Error Scan, but no errors were found.

What file system do you recommend for formatting the drives? Is a full format necessary, or is a quick format enough?

I have CrystalDiskInfo.
 
Last edited:

bonox

Active Member
Feb 23, 2021
129
48
28
I still can't believe you're flogging this dead horse over and over.

Surface tests won't tell you much about file integrity; they tell you whether the drive is working as expected. You could have corrupted files all over it (written that way, bad cables, bit rot, etc.) and a surface test won't say squat about it. It's probably better than stabbing yourself in the eye with pencils, but you'd once again be better off learning about the benefits of backups and of file systems that can not only tell you whether the files are intact (any checksumming file system, for example ZFS or ReFS) but, if set up appropriately, can actually fix them for you.

TLDR; "How I learned to make many copies of my important files on self checking and repairing filesystems and then stop worrying about it"
 

mmaretrotech

Member
Jun 22, 2025
35
1
8
I use TeraCopy with verify.

I copy files from my PC to my external hard drive with TeraCopy, which performs an xxHash3-64 integrity check.

I'd like to have a file with the checksum of these files so I can compare them with the files on my external hard drive next year and see if they're intact or not.
 

Jelle458

Member
Oct 4, 2022
62
28
18
I guess chkdsk could help you out a little bit, but still won't do exactly what you want.

If you only use the drives once a year, the best way to avoid bit rot is probably to build a NAS using the ZFS filesystem and turn it on once a year.

Another way to store data long term would be tape, but I bet that's beyond your scope.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,430
542
113
I'd like to have a file with the checksum of these files so I can compare them with the files on my external hard drive next year and see if they're intact or not.
If we're just talking dumb hard drives/filesystems, you can use something like hashdeep; it'll create a list of file paths and their corresponding hashes. You can then run it again in audit mode to see whether any of the file hashes have changed (which needs a fair amount of IO and CPU to do its thing). It's useful enough that for the stuff I keep on offline hard drives I also keep a hashdeep manifest of the file tree, so I'll at least know whether the backup is good should I ever need to recover from it - I've had dying hard drives feed me bad data without reporting read errors before.
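
A minimal run for the once-a-year check might look like this (paths are placeholders; -c sha256 picks the hash, -r recurses, -l keeps the paths relative so the manifest stays portable):

Code:
# year 1: build the manifest next to the data
cd /mnt/backupdrive
hashdeep -c sha256 -r -l files/ > manifest.txt

# year 2+: audit the same tree against the manifest and list any mismatches
hashdeep -c sha256 -r -l -a -k manifest.txt -vv files/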

Middleware like dm-integrity can provide the same sort of function.

Similarly, checksumming filesystems might be able to detect bitrot, but without redundancy I don't believe any can recover from such errors when there's only a single drive in play; perhaps single bit errors would be recoverable, but I've not had enough experience in this regard.

In the case of dumb storage, back in the day you'd use something like PAR2 parity volumes to provide some level of error-checking and recovery, at the cost of consuming additional space.
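
If you go that route, par2cmdline is the usual tool; roughly like this (file names are placeholders, and -r10 asks for about 10% worth of recovery data):

Code:
# create recovery volumes covering ~10% of the data
par2 create -r10 photos.par2 photos/*.jpg

# later: verify the files, and repair from the recovery volumes if anything is damaged
par2 verify photos.par2
par2 repair photos.par2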

You should heed others' warnings though. All of the integrity checking in the world won't help you when the magic smoke gets out. More redundancy, more backups first.
 
Last edited:
  • Like
Reactions: nexox

bonox

Active Member
Feb 23, 2021
129
48
28
Similarly, checksumming filesystems might be able to detect bitrot, but without redundancy I don't believe any can recover from such errors when there's only a single drive in play;
Just FYI, ZFS has a "copies" property which increases the number of copies of every block to the value you specify. In practical terms this means a 6TB disk becomes 3TB with copies=2 and 2TB with copies=3, but it also means a single disk can recover from bit errors, subject to the mechanics of the disk being sound of course. There may be other file systems that can do this, but ZFS is the only one I'm aware of that does it automatically (and fixes errors automatically during scrubs) with no user intervention or knowledge needed once the pool has been created.
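
For a single external disk that would look roughly like this (pool and device names are made up, and copies only applies to data written after the property is set):

Code:
# single-disk pool with two copies of every block
zpool create -O copies=2 backuppool /dev/sdX

# once a year: scrub to verify checksums and rewrite any bad copies, then check the result
zpool scrub backuppool
zpool status -v backuppool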

You'd still be better off with more physical disks though since it covers more failure scenarios.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,430
542
113
Was just checking up on that myself, apparently you can do similar with btrfs by using the dup profile (basically two copies of the same file on the same volume, so same loss of space as the ZFS copies option).
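
The btrfs version would be something like this (device and mount point are placeholders):

Code:
# format a single drive with duplicated data and metadata
mkfs.btrfs -d dup -m dup /dev/sdX
mount /dev/sdX /mnt/backup

# a periodic scrub verifies checksums and repairs from the duplicate copy where possible
btrfs scrub start -B /mnt/backup
btrfs device stats /mnt/backup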

I knew someone in the past who used two USB drives in a RAID1 for their backup drives, but whilst it would achieve the requisite redundancy I think we're quickly getting into the realms of Heath Robinson.