Seagate Backup Plus 4TB Drive - Cheap 2.5" 4TB drives


sth

Active Member
Oct 29, 2015
379
91
28
They aren't technically laptop drives though, being 15mm thick, and there are enterprise 2.5" disks which must perform better. Both my Reds and these are 5400rpm drives; speed-wise they are slower than the 4TB Reds, delivering between 50 and 120MB/s depending on location on the disk surface, versus my Reds which are more in the region of 60-130 or so. The main issue is that a single drive doesn't scale beyond a single request, so regardless of RAIDZ2/mirrored use they are going to suck for anything which serves multiple users, i.e. a Plex streamer used by more than one user. I'll test it to confirm later, but I'd expect a 24-drive stripe to "only" provide 24 drives x 10MB/s, i.e. 240MB/s, which is fine if you're using 1GbE networking but sucks for those with 10GbE.
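If anyone wants to see the effect I mean, something like this reproduces it - two "viewers" pulling different files at once (paths are illustrative):

# one stream is fine; with two, per-stream throughput collapses
dd if=/mnt/tank/media/film1.mkv of=/dev/null bs=1M &
dd if=/mnt/tank/media/film2.mkv of=/dev/null bs=1M &
wait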
 

KioskAdmin

Active Member
Jan 20, 2015
156
32
28
53
@sth I've always thought these were more for media serving and low-end NAS duty, not something you'd run VMs on and hit 24x7.
 

sth

Active Member
Oct 29, 2015
379
91
28
I'm not talking VMs, far from it. I was looking for an array of space- and energy-efficient, "somewhat" performant drives to serve media to three or four users. The way the perf tanks with simultaneous reads (and writes) suggests limited cache/non-functional NCQ (or possibly SMR rather than PMR?) storage. Either way, I'm curious what performance other users are seeing, to eliminate my setup or fix my limited knowledge.
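If anyone wants to check the NCQ/SMR theory on their own unit, something along these lines under Linux should do it (device name illustrative):

# model/firmware - look the model number up to see if it's an SMR part
smartctl -i /dev/sdb
# NCQ: hdparm reports the advertised queue depth (32 = NCQ present)
hdparm -I /dev/sdb | grep -i 'queue depth'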

EDIT: I'd be specifically interested in anyone using these under an OS other than FreeNAS. I haven't had time to check under plain Ubuntu or anything, although that's next.
 

sth

Active Member
Oct 29, 2015
379
91
28
Tested this same drive on an Ubuntu bootable USB image and benchmarked 120MB/s with a single 'dd' and 32MB/s with two concurrent 'dd' reads, roughly a 3x increase over the same test when connected to FreeNAS. Is this possibly a driver issue?
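For anyone wanting to compare, the test was roughly this, reading from the raw device (device name illustrative):

# single stream
dd if=/dev/sdb of=/dev/null bs=1M count=8192
# two streams at different offsets, run concurrently
dd if=/dev/sdb of=/dev/null bs=1M count=8192 &
dd if=/dev/sdb of=/dev/null bs=1M count=8192 skip=1000000 &
wait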
 


whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Tested this same drive on an Ubuntu bootable USB image and benchmarked 120MB/s with a single 'dd' and 32MB/s with two concurrent 'dd' reads, roughly a 3x increase over the same test when connected to FreeNAS. Is this possibly a driver issue?
I know you said you are after more linear single-drive numbers (non-FreeNAS), but just as a baseline, this is a 6-disk raidz2 pool with a hussl as a SLOG device. You can clearly see the pool of 6 devices working in parallel push about GigE speeds, and the SLOG another 100MBps or so, equating to nearly 2Gbps of throughput to the pool. This is a storage-intensive sVMotion of a 50GB fio tester VM, so about as 'real-world' as it gets, and yes, it does still run pretty darn well, but I wouldn't run all 50+ of my VMs off of it :-D

upload_2016-10-25_19-44-34.png
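For reference, a pool laid out like that gets created along these lines (device names illustrative):

# 6-disk raidz2 with a separate SLOG device (the hussl)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 log da6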
 

sth

Active Member
Oct 29, 2015
379
91
28
Thanks whitey. Here's mine in the final array config, writing some .ISO images.
I suspect I'll be swapping these out for more performant 3.5" drives soon.

3x8rz2.png
 

ridney

Member
Dec 8, 2015
77
33
18
Singapore
I have the same drives in a 6-drive RAID 6 array under xpenology and I am having no problem streaming to multiple Plex users. I can test if you're interested.
 

sth

Active Member
Oct 29, 2015
379
91
28
Would appreciate it - drop me a PM if you need any help running commands etc.
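Something like this fio job would approximate a few simultaneous Plex viewers (directory, file size and job count are illustrative; needs fio installed):

# four concurrent 1MB-block sequential readers, combined result reported
fio --name=plexsim --directory=/volume1/media --rw=read --bs=1M \
    --size=4G --numjobs=4 --group_reporting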
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Thanks whitey. Here's mine in the final array config, writing some .ISO images.
I suspect I'll be swapping these out for more performant 3.5" drives soon.

View attachment 3699
Yeah, those numbers are pretty piss poor, I can see your qualms... I can't speak to the 'writing .ISO files to it' comment though - locally, or across a network share? What protocol, if across the network? No read/write cache that I can see; I'm a sucker for those per pool. 24 drives in a 3-stripe raidz2 should do a bit better than that, I would think. I know that's pretty open to interpretation, but if it's not meeting your needs try a different config, maybe with cache devices or striped mirrors (RAID-10 in ZFS), just to see what the heck is going on. You don't have too much on your pool now (100GB or so, right?) so not a real big deal IMHO. What chassis/backplane/SAS HBA again? Will go re-read thread origins.
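The striped-mirror layout I mean is roughly this - first three pairs shown, keep the pattern going for all 12 (device names illustrative):

# "RAID-10 in ZFS": a stripe of mirror vdevs
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5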

EDIT: Don't see that info. Even at 65MBps you're at about 1/2 of a GigE connection, so if you're stuck at that it's not a deal breaker, but you wanna push those speeds of course, hence my recommendation to try more performant pool layouts potentially.
 

sth

Active Member
Oct 29, 2015
379
91
28
Here you go - added a P3700 400GB SLOG under-provisioned down to 16GB and an 800GB P3700 L2ARC - probably amongst the best SLOG & L2ARC devices you can get.

withl2andslog.png
They are installed in an SC216 with an 'A' backplane, so no multiplexing going on etc.
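For anyone following along, bolting them onto the existing pool is just this (FreeBSD NVMe device names illustrative; the 16GB SLOG is a partition on the 400GB drive):

zpool add tank log nvd0p1
zpool add tank cache nvd1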
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Here you go - added a P3700 400GB SLOG under-provisioned down to 16GB and an 800GB P3700 L2ARC - probably amongst the best SLOG & L2ARC devices you can get.

View attachment 3703
They are installed in an SC216 with an 'A' backplane, so no multiplexing going on etc.
Wow, not a whole lot better, above GigE speeds now though. What is your network running at or limited to? If it's GigE then I'd save those fancy NVMe drives for something else and slap an S3700/hussl in as SLOG and an S3610 in as L2ARC. Same backplane as I have my 6 drives of this model in; I use a 100GB hussl for both SLOG/L2ARC because that's what I had :-D

EDIT: You're slammin' data into it 'somehow' at a reasonable rate, as I see your data usage has gone up from 112GB to 2.3TB per raidz2 vdev between your earlier picture and this one. What are you using/what protocol to dump data into it?
 

sth

Active Member
Oct 29, 2015
379
91
28
Well, not quite. The aggregate number written to the disk array, I believe, is the sum of the RAIDZ2s, so circa 60MB/s. The total written to the volume includes logs, which aren't really data so to speak, so that offsets the number somewhat.
I saw better perf under Linux, so maybe there's hope of a driver improvement somewhere, but yeah, these are likely going to be used for backup purposes only, so the P3700s will be coming out and going in a 24 * 4TB 3.5" array!

I'm dumping a bunch of movie rips to this array using rsync from my existing 10 * 3.5" server. I was hoping to have duplicated this data at closer to 500Mb/s, so it should have been done by now :)
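The copy itself is just something like this (host and paths illustrative):

# -W (whole-file) skips rsync's delta algorithm, which only burns CPU
# on a one-shot copy of large media files
rsync -avWP oldserver:/tank/movies/ /mnt/tank/movies/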
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
We could debate that for sure, but I digress (flush vs. actual throughput, disk vs. network rate)... I know when I do a sVMotion the write cache is certainly accounted for in the cacti/FreeNAS graphs.

Very odd. Wonder why you're only seeing a few MBps writing to them; mine aren't anything to write home about at 20-30MBps, but 10x better is quite the disparity in a similar raidz2 config (minus 2 drives for me, plus you 'should' get the stripe/vdev benefit as well, right? Looks like a RAID-60-style ZFS config to me unless my eyes are crossing). Did you set up the first raidz2 pool TANK and then zpool add the other two raidz2 vdevs?
 

sth

Active Member
Oct 29, 2015
379
91
28
I used the FreeNAS volume creation tool to create the 3 RAIDZ2 stripes, log and cache simultaneously... like this:

volumemgr.png
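The CLI equivalent would be roughly (device names illustrative):

# three 8-disk raidz2 vdevs striped into one pool, plus log and cache
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz2 da8 da9 da10 da11 da12 da13 da14 da15 \
  raidz2 da16 da17 da18 da19 da20 da21 da22 da23 \
  log nvd0p1 cache nvd1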
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Cool. Just curious, and a silly question since I know the chassis: those are PCIe NVMe drives, right, not the U.2 drive types? Good to see FreeNAS detect those natively.
 

sth

Active Member
Oct 29, 2015
379
91
28
Yup...

161101-216.jpg

Chelsio 520-CR, 3 * 9207-8i, P3700 800GB cache, P3700 400GB log.

Not what I was intending to use them for originally, but they were lying around, so why not!? :) Those P3700s probably cost more than all the Seagates! :)
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
SNIFF... that's soo beautiful. Mine has an Intel X520-DA2 and the first three (LSI 2008 cards) but is sadly missing the last two :-D HAHA

Yeah that's super depressing performance when you compare the HW you have thrown at it :-(

How about local, when you're on the pool? dd test, fio, hell, I dunno what else/the kitchen sink we can throw at it.
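e.g. something local like this, bearing in mind ZFS compression will inflate the numbers if you feed it zeros (path illustrative):

# /dev/urandom can itself bottleneck; pre-generate a source file if so
dd if=/dev/urandom of=/mnt/tank/ddtest bs=1M count=8192
rm /mnt/tank/ddtest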
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
People who have P3700s "just lying around" make me sad. Send them to me - I have good homes for them.
 

sth

Active Member
Oct 29, 2015
379
91
28
The local-side 'dd' is above; with anything other than a single read it plummets to this level of perf. The network isn't the limiting factor, sadly... I've eliminated the HBAs and backplane too. It's definitely the disks themselves.