Seagate Backup Plus 4TB Drive - Cheap 2.5" 4TB drives


sth

Active Member
Oct 29, 2015
379
91
28
I had similar issues under FreeBSD/FreeNAS. Performance sucked, and I think they were stalling and falling out of the array, leaving partition or file tables corrupted. Maybe TLER-type issues? Under OmniOS I've had no trouble and they are trucking along fine with significantly better performance, FYI.
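For anyone curious, you can check whether a given drive even supports a RAID-friendly error-recovery timeout (SCT ERC, the ATA feature behind TLER) with smartctl - sdX below is just a placeholder for your device:

Code:
# query SCT Error Recovery Control (the ATA feature behind TLER); consumer drives often report it as unsupported
smartctl -l scterc /dev/sdX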
 

iamtelephone

Member
Jan 30, 2017
49
16
8
Hi, I have 10 pcs of ST4000LM016 in a btrfs RAID and some of them are failing. The weird thing about it is that the disks that fail will remove themselves from the controller. Then I usually have to connect the USB adapter the disk came with and run the SeaTools long test, and then the disk is OK again. I run a read/write test for 4-5 days and nothing is wrong. Put it back into the server, and after a couple of hours it fails again. What gives?
I've had a similar issue with Proxmox/Debian finding errors on two of my 2.5" 4TB drives. My fix is just as weird. I remove the drive from my server and mount it in my workstation. I'll check dmesg and the SMART log (no errors), run a short SMART test (again, no errors), then re-mount it in the server.

For some reason that's all it takes for a supposedly "bad" drive to be fixed (for a few months till the same issue occurs again)... I doubt I even have to run the SMART test.
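For reference, the whole check amounts to something like this (sdX is just an example device name):

Code:
# look for ATA/SATA errors against the drive in the kernel log
dmesg | grep sdX
# dump the SMART attributes and error log (comes back clean every time)
smartctl -a /dev/sdX
# kick off a short self-test, then read the results a few minutes later
smartctl -t short /dev/sdX
smartctl -l selftest /dev/sdX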

I've tried diagnosing the problem, but it happens too infrequently to pin down a proper solution.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
In my experience these drives are not well suited to running arrays of any kind. I went in with both feet, getting up to 30+ drives, and moved most of my less speed-sensitive storage to them. They are slow, have high failure rates, and perform really poorly under the intense read-modify-write activity associated with rebuilding an array (ZFS resilver). The fact that they fail fairly often makes resilvering a more common activity than you'd see with other drives - which is only aggravated by how painful each resilver is.

I've seen the weird "failed drive recovers after SMART checks" behavior on several occasions. I've also seen drives with more severe error behavior that I decided to wipe before tossing them out - only to discover that they worked perfectly after writing all-zeros across every sector with "dd".
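(For reference, the zero pass is nothing exotic - something along these lines, with sdX standing in for the drive being wiped; double-check the device name before running it.)

Code:
# write zeros across every sector of the drive; status=progress reports how far along it is
dd if=/dev/zero of=/dev/sdX bs=1M status=progress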

There's just something odd about the "partial SMR" nature of these drives. I speculate - without evidence, just a SWAG - that sustained writes are what drives them crazy.

In any case, I've now moved everything I care about back onto more traditional media (e.g., 3.5" Reds).

If anybody wants the 20 or so bare drives that I think are healthy enough to actually still use, just let me know - I'll give you a good price. They are likely quite good for their intended purpose of semi-cold archival storage.
 
  • Like
Reactions: CA_Tallguy

theailer

New Member
Feb 27, 2018
2
0
1
42
Yeah, but it's really weird that the disks fail only to work again for some amount of time. I've been replacing them with the 5TB variant and it seems much better.
 

neggles

is 34 Xeons too many?
Sep 2, 2017
62
37
18
Melbourne, AU
omnom.net
Hi, I have 10 pcs of ST4000LM016 in a btrfs RAID and some of them are failing. The weird thing about it is that the disks that fail will remove themselves from the controller. Then I usually have to connect the USB adapter the disk came with and run the SeaTools long test, and then the disk is OK again. I run a read/write test for 4-5 days and nothing is wrong. Put it back into the server, and after a couple of hours it fails again. What gives?
This is pretty simple - do you remember the whole TLER fiasco back when WD Red drives first became a thing? These drives aren't meant for RAID, so when they go into data recovery mode after finding a bad block, it can take them up to 7-10 minutes to complete the recovery cycle - during this period they're effectively non-responsive to the controller, and the controller (correctly) boots them out of the array as a failed drive.

[Edit: Whether you're running a hardware RAID controller, or a software btrfs RAID setup, the result is the same - but you might be able to tweak the timeouts of your btrfs setup to bypass this problem, if you can set the drive timeouts to 5-7 minutes instead of 30-60 seconds]
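For anyone wanting to try that, something along these lines is what I mean - either cap the drive's own error-recovery time (only possible if it supports SCT ERC, which many of these consumer drives don't) or raise the kernel's command timeout so Linux doesn't give up before the drive does. sdX and the values are only examples:

Code:
# cap the drive's error recovery at 7 seconds (value is in tenths of a second) - only works if SCT ERC is supported
smartctl -l scterc,70,70 /dev/sdX
# otherwise, raise the kernel's command timeout for this drive (seconds; default is 30)
echo 420 > /sys/block/sdX/device/timeout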

The drive completes its bad-block recovery process, then the long test you run externally gives it an opportunity to remap some more bad blocks - but it's only a matter of time before it runs into another one once you put it back into the array and start rebuilding/scrubbing data.

Short version: this happens because the drive's worn out (too many bad blocks) & it's an indication that you should replace the drives. How old are yours? The 4TB units seem to be much less durable than the 5TB.
 
  • Like
Reactions: CA_Tallguy

Joel

Active Member
Jan 30, 2015
855
194
43
42
Sooo...

I have two of these (Backup Plus 4TBs) that I bought with the intention of making an array out of them plus a few more, but thanks to this thread I've put a stop to that. Anyway, I've been using them as they are intended, and I still don't really like them because of the SMR behavior (SLOOWWWWWWWWWWWW writes).

Does anyone here have experience with the WD alternative? I know they're not shuckable, but any other drawbacks to be aware of?
 

Peanuthead

Active Member
Jun 12, 2015
839
177
43
44
Move up to the 5TB version and you'll be good. I only have one 4TB left, which I keep in the external enclosure for quick backup and transfer purposes.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
So the 5TBs are not SMR?
Well, whatever they are, they seem to behave much better. I have a few 5TB drives used just as externals for offsite backup of critical data, but not in a RAID setup.
 

GENTILCO

New Member
Apr 20, 2018
1
0
1
41
Seagate Backup Plus Fast 4TB portable hard drive - the pins broke off the controller board where you plug the SATA cable in. I'm trying to find a replacement case, but I can't find any that accommodate a dual-SATA RAID 0 setup like this. All I really need is a replacement controller. Any help would be awesome, thanks.
 

Attachments