Seagate Backup Plus 4TB Drive - Cheap 2.5" 4TB drives

Notice: Page may contain affiliate links for which we may earn a small commission through services like Amazon Affiliates or Skimlinks.

Deslok

Well-Known Member
Jul 15, 2015
1,122
125
63
34
deslok.dyndns.org
I'm awaiting delivery of a SM CSE-219A-R920UB chassis and plan to load it with 5TB ST5000LM000s. So far I've bought only one drive for testing, shucked from an external. I haven't found any command that reports the actual cache size of the disk. Does anyone have a FreeBSD command to print such info about a disk?

Also waiting eagerly for performance reports from other users here on the ST5000LM000 drives in a RAID setup.
Some single-disk tests; performance at the lower stripe sizes seems unstable and varies from run to run:

View attachment 7001 View attachment 7002 View attachment 7003
Not sure on the commands, but the data sheet shows 128 MB; beyond that, CrystalDiskInfo usually reports it.
 

b3nno

New Member
Jul 17, 2017
12
2
3
35
Yeah, I know about the datasheets, but I was curious whether the shucked disk is somehow different from the drives that aren't.
I tried CrystalDiskInfo; it does not report cache size like it does with my other disks.
 

casperghst42

Member
Sep 14, 2015
112
20
18
55
I was at a shop yesterday where they had the Seagate STEF4000400 (the European part number) at €111 a piece, which is reasonable here in Germany. Does anyone know what drives are inside these boxes? I can't find anything on them.
 

Deslok

Well-Known Member
Jul 15, 2015
1,122
125
63
34
deslok.dyndns.org
I was at a shop yesterday where they had the Seagate STEF4000400 (the European part number) at €111 a piece, which is reasonable here in Germany. Does anyone know what drives are inside these boxes? I can't find anything on them.
https://www.google.com/url?sa=t&sou...EAioQFghzMAA&usg=AOvVaw20K5HeZ4NFVR7khlU_ZREd

It looks like they're an EMEA version of the Expansion Plus. It should be the same 4TB 2.5" drive as the Backup Plus from what I can see, but without opening one I can't be 100% sure.
 

b3nno

New Member
Jul 17, 2017
12
2
3
35
Would appreciate it if you post your findings Casper :) I've also come across some of those, pretty cheap.
 

casperghst42

Member
Sep 14, 2015
112
20
18
55
Would appreciate it if you post your findings Casper :) I've also come across some of those, pretty cheap.
I ended up getting some 3.5" 4TB WD Reds instead, as they were almost the same price. But I'll pick one up shortly and report back. Can't say when, but hopefully before Christmas.

Cheers,
Casper
 

b3nno

New Member
Jul 17, 2017
12
2
3
35
I bought an STEF4000400. It contained an ST4000LM024, the newer 4TB disk with SMR, MTC, etc.
The chassis arrived:
20171114_125842.jpg
Chop chop:
20171114_210440.jpg
20171115_000032.jpg
 

anoother

Member
Dec 2, 2016
133
22
18
34
@b3nno: Very nice, but how are you going to power it now?

I recently replaced my 4TB drives (three M016s, though there might have been an M024 amongst them) with 7200rpm 3.5" disks; I was getting sick of the poor performance.

Pretty convinced these must all be SMR drives. My mum is using a brand-new unshucked one now for backups on Windows; even there it looks like it's set to take ~18hrs for a 1TB copy.

Looks like the optimizations made in the 5TB drives have made a good difference; I wonder if they apply to the new 4TBs, too.

I'm going to try some experiments with SMR-optimized FS/options when I get a chance, see if any more performance can be extracted from my drives.
 

sth

Active Member
Oct 29, 2015
379
91
28
This doesn't make sense. There are benchmarks throughout this thread showing the 4TB is capable of 60-100MB/s writes across the disk surface. If you're seeing circa 15MB/s (1TB in 18hrs) you have another issue rather than simply SMR. Is she connecting over USB2?
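(The ~15MB/s figure is straight arithmetic from those numbers, taking 1 TB as 10^6 MB; a quick shell sanity check:)

```shell
# 1 TB ~= 1,000,000 MB; 18 hours = 64,800 seconds
echo $(( 1000000 / (18 * 3600) ))   # -> 15 MB/s (integer-truncated)
```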
 

anoother

Member
Dec 2, 2016
133
22
18
34
USB3. I've seen similar performance in ZFS RAID1, with a combination of shucked and original bare drives. Resilvers, with ~1TB free, took around 3+ days iirc.

For me these drives have consistently demonstrated the typical behaviour you read about with SMR disks: initial fast writes until the cache is full, then cripplingly slow performance, exacerbated further on non-sequential workloads. And decent sequential reads.

There could have been other things about my setup that were causing the issues - eg. vibration (using CSE-M28SAB enclosure), or maybe a single bad drive (being a 2- and then 3-drive RAID 1, I'll have been limited by the slowest disk). On which note, I had to RMA one drive -- The only one I bought bare as opposed to shucked from an external.

The poor NTFS performance, again, could be explained by e.g. the delta-copying mechanism used by the backup tool (Bvckup 2).

I could well be wrong, but the above experience says to me that these are prototypical SMR drives.

I will give them another chance, though, and as noted, will be seeing if there are any ways to squeeze some more performance out of them.
 

anoother

Member
Dec 2, 2016
133
22
18
34
Hmm, skimming back across this thread, I just can't relate to the performance seen by others.

And yes, I've always had min_ashift=12. Always had plenty of RAM (48 then 64GB) and both ZIL and L2ARC on SLC SSDs.

Also changed disk controllers & entire base system along this journey. The only constants are the drives and the enclosure; and the latter had no issue allowing the SSDs installed alongside these Seagates to reach their potential.

All my issues went away with 'normal' disks. Resilver for 3TB down to <16 hrs (from 72+).

EDIT: I have been using ZFS compression and dedup; the latter is unlikely to be something SMR drives like, as it is metadata-heavy and, I expect, reliant on lots of random IO. My dedup ratio has floated between 1.4 and 1.8.
 
Last edited:

b3nno

New Member
Jul 17, 2017
12
2
3
35
@b3nno: Very nice, but how are you going to power it now?
I've put a small power supply from an old Shuttle PC in the 5.25" bay, shorted the pwr_on pin to ground, and removed unnecessary cabling.
The reason I cut the thing in half is that I don't have room for a full-depth server; I only have some rack space with about 19" of depth.

I've been testing these new Seagate disks while waiting for more to arrive. I put two ST4000LM024s and an ST5000LM000 into a raidz1 volume. Performance seems OK; I've written it to 82% capacity with compressed media, accessing the volume via CIFS. In the beginning it was writing at 110-130 MB/s. At 82% full, the volume writes at about 90-100 MB/s.

I tried yanking one of the ST4000LM024s while writing to the pool; the server behaved nicely and went along without a hiccup.
I wiped the disk and started resilvering. It finished in 22 hours and 41 minutes. Some of the progress:

scan: resilver in progress since Sun Dec 10 16:41:10 2017
180G scanned out of 8.62T at 223M/s, 11h2m to go
60.0G resilvered, 2.04% done

scan: resilver in progress since Sun Dec 10 16:41:10 2017
728G scanned out of 8.62T at 155M/s, 14h52m to go
243G resilvered, 8.25% done

scan: resilvered 2.87T in 22h41m with 0 errors on Mon Dec 11 15:23:03 2017
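(Worth noting: that completion line works out to an average of roughly 37 MiB/s, assuming the T suffix means TiB; a quick awk check:)

```shell
# 2.87 TiB resilvered in 22h41m -> average MiB/s
awk 'BEGIN { printf "%.1f\n", 2.87 * 1024 * 1024 / (22*3600 + 41*60) }'   # prints 36.9
```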

Wondering what more performance tests I can run on the pool. I tried dd with zeroes, but with lz4 compression the test is useless; it wrote 20GB in a couple of seconds.
I used this command:
dd if=/dev/zero of=/mnt/storage_vol/dataset_main/ddfile bs=1024k count=20000
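One way around the lz4 problem is to write incompressible data instead of zeroes. A minimal sketch (the /tmp path and 64MiB size are placeholders; point the output at the pool and scale up for a real test, and note that /dev/urandom itself can bottleneck below pool speed on some systems):

```shell
# Random data defeats lz4, so dd actually measures the disks.
# bs is spelled out in bytes so the same line works with FreeBSD and GNU dd.
dd if=/dev/urandom of=/tmp/ddfile bs=1048576 count=64
wc -c /tmp/ddfile   # confirm 67108864 bytes (64 MiB) landed
```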
 

Churchill

Admiral
Jan 6, 2016
838
213
43
I could well be wrong, but the above experience says to me that these are prototypical SMR drives.

I will give them another chance, though, and as noted, will be seeing if there are any ways to squeeze some more performance out of them.
If you figure out a use for them let me know. I've got a stack of 'em that aren't doing diddly dick and loathe putting them back in my server.
 

b3nno

New Member
Jul 17, 2017
12
2
3
35
Scrub is going along nicely as well:

                                                     capacity     operations    bandwidth
pool                                                alloc   free   read  write   read  write
--------------------------------------------------  -----  -----  -----  -----  -----  -----
test                                                8.65T  2.22T  1.88K     20   238M  84.0K
  raidz1                                            8.65T  2.22T  1.88K     20   238M  84.0K
    gptid/d478528b-dcfd-11e7-b42c-000c29d77dc6.eli      -      -  1.14K      3   119M  60.8K
    gptid/d595fb9a-dcfd-11e7-b42c-000c29d77dc6.eli      -      -  1.16K      4   119M  60.0K
    gptid/82fb3cab-ddc0-11e7-9e76-000c29d77dc6.eli      -      -  1.19K      4   119M  56.8K
--------------------------------------------------  -----  -----  -----  -----  -----  -----


root@freenas:~ # zpool status test
  pool: test
 state: ONLINE
  scan: scrub in progress since Mon Dec 11 21:46:11 2017
        203G scanned out of 8.65T at 355M/s, 6h56m to go
        0 repaired, 2.29% done
config:

        NAME                                                STATE     READ WRITE CKSUM
        test                                                ONLINE       0     0     0
          raidz1-0                                          ONLINE       0     0     0
            gptid/d478528b-dcfd-11e7-b42c-000c29d77dc6.eli  ONLINE       0     0     0
            gptid/d595fb9a-dcfd-11e7-b42c-000c29d77dc6.eli  ONLINE       0     0     0
            gptid/82fb3cab-ddc0-11e7-9e76-000c29d77dc6.eli  ONLINE       0     0     0

errors: No known data errors
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Well fark... lookin' like one of my 6 is giving up the ghost a year or so in. Saw this today. (Don't laugh at me for not seeing this a month into this debacle, haha)

upload_2017-12-30_16-57-11.png

Tried reseating the drive and a resilver commenced. The issue came back very soon after; I've tried 'zpool clear' a few times now and a manual 'zpool scrub'... it has bombed out a time or two since.

LAME... anyone wanna unload a spare they have laying around that they don't trust, cheap, to help me out? I'm configured in raidz2 across 6 of these POS disks, but I would like to repair/replace the disk soon and think about 'plan B'.

EDIT: I don't even have the space to back this up currently. Hell I suck!

[root@freenas-esxi6a] /mnt/datazilla/movies# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
datazilla 21.8T 17.2T 4.55T - 25% 79% 1.00x ONLINE /mnt

Last chance at yet another scrub. So far she's cookin', but I'm not holding out much hope that there isn't something seriously wrong/sick w/ that disk.

[root@freenas-esxi6a] /mnt/datazilla/movies# zpool status
  pool: datazilla
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: ZFS-8000-9P
  scan: scrub in progress since Sat Dec 30 16:50:40 2017
        239G scanned out of 17.2T at 378M/s, 13h5m to go
        11.6M repaired, 1.36% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        datazilla                                       ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/684d888f-3f15-11e6-800d-0050569a0ed7  ONLINE       0     0     0
            gptid/699161ef-3f15-11e6-800d-0050569a0ed7  ONLINE       0     0     0
            gptid/6ade9b2f-3f15-11e6-800d-0050569a0ed7  ONLINE       0     0     0
            gptid/6c2112df-3f15-11e6-800d-0050569a0ed7  ONLINE       0     0 2.98K  (repairing)
            gptid/6d64c12f-3f15-11e6-800d-0050569a0ed7  ONLINE       0     0     0
            gptid/6e6dc39f-3f15-11e6-800d-0050569a0ed7  ONLINE       0     0     0
        logs
          gptid/6ea70c28-3f15-11e6-800d-0050569a0ed7    ONLINE       0     0     0
        cache
          gptid/6ed6a4de-3f15-11e6-800d-0050569a0ed7    ONLINE       0     0     0

errors: No known data errors
 
Last edited:

tom302

New Member
Jan 5, 2018
1
0
1
45
My requirements have changed.

Anyone want a deal on 6 unopened retail boxes of the STDR4000100? $500 + shipping for all six.
 

theailer

New Member
Feb 27, 2018
2
0
1
42
Hi, I have 10 pcs of ST4000LM016 in a btrfs RAID and some of them are failing. The weird thing is that a failing disk will remove itself from the controller. Then I usually have to connect the USB adapter the disk came with and run the SeaTools long test, and then the disk is OK again. I run a read/write test for 4-5 days and nothing is wrong. Put it back into the server and after a couple of hours it fails again. What gives?