Seagate Backup Plus 4TB Drive - Cheap 2.5" 4TB drives


whitey

Moderator
Jun 30, 2014
For kicks, show us a single P3700 under FreeNAS across 10G NFS/iSCSI (with source and destination capable of max performance), hah. Does something like a sVMotion or your .iso generation process FLY?
 

sth

Active Member
Oct 29, 2015
I'll get to that... in the meantime, here's the output of my movie folder copy.

Code:
sent 6.74T bytes  received 34.92K bytes  79.08M bytes/sec
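(At a 79.08 MB/s average, that 6.74T works out to roughly 24 hours of copying.)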
and here's the scrub I'm running currently... note the higher read speed...

[screenshot: scrub.png]
 

Rain

Active Member
May 13, 2013
@sth Are you sure FreeNAS isn't creating the volume with ashift=9 instead of ashift=12? I'm fairly sure these are 4K-physical (512-emulated) disks, which might explain the slow write speeds. ZFS avoids smaller writes when possible, so while using ashift=12 might help, it probably won't eliminate the issue entirely.

I'd try throwing Linux on that machine, installing ZFSonLinux, and trying the exact same configuration just for kicks, to see if it performs better. (Don't import the pool, though; re-create it.)
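If you do give it a shot, a minimal sketch of what I mean, assuming the disks show up as /dev/sd[b-i] (placeholder names; /dev/disk/by-id paths are safer in practice):

Code:
# Force 4K alignment at pool creation; ZoL accepts -o ashift=12 directly
zpool create -o ashift=12 testpool raidz2 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
# Confirm what ZFS actually used
zdb | grep ashift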
 

fractal

Active Member
Jun 7, 2016
This is the first time I have ever seen someone suggest that ZFS on Linux might be faster than on FreeBSD or Solaris. That's new.

But the suggestion to open a shell on your FreeNAS box and check "zdb | grep ashift" is a good idea.
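One wrinkle: FreeNAS keeps the pool cache file in a non-default spot, so a bare zdb may come back empty. Pointing it at FreeNAS's cache file works:

Code:
# FreeNAS stores zpool.cache under /data/zfs rather than the default location
zdb -U /data/zfs/zpool.cache | grep ashift
# expect one line per vdev, e.g.
#            ashift: 12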
 

Rain

Active Member
May 13, 2013
This is the first time I have ever seen someone suggest that ZFS on Linux might be faster than on FreeBSD or Solaris. That's new.
I just don't like FreeNAS :p (Hopefully FreeNAS 10 will change my opinion.) In all seriousness, I believe ZoL is more aggressive about tuning and enabling features for performance by default. ZoL is more likely to select ashift=12 by default at the cost of space efficiency, for example. ZoL also supports the large_blocks feature flag, while FreeNAS (being based on FreeBSD 9) doesn't, as far as I know. Nvm, apparently FreeNAS 9.3 added this? I doubt it's enabled by default in FreeNAS 9.x, though; you probably have to force it at pool creation.
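If 9.3 really did pick it up, enabling it would look something like this (untested on FreeNAS by me; pool and dataset names are placeholders):

Code:
# Enable the feature flag on an existing pool (one-way once it's active)
zpool set feature@large_blocks=enabled tank
# Then opt in per dataset; recordsize above 128K needs the flag
zfs set recordsize=1M tank/media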

But the suggestion to open a shell on your FreeNAS box and check "zdb | grep ashift" is a good idea.
Thinking about it a bit more, consider that with 3 x 8-disk RAIDZ2 vdevs, each "block" of data gets striped across the vdevs and then split six ways within a vdev (once per "data disk"). Compared to, say, a single 6-disk RAIDZ2, this amplifies the chance of ZFS writing a "small" chunk (smaller than 4K) to the disks even if your write workload is mostly "large" blocks. Reads would be less affected, which is what @sth seems to be seeing.
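Rough numbers to illustrate, assuming the default 128K recordsize and 6 data disks per vdev:

Code:
128K record / 6 data disks ~= 21.3K per disk  -> fine at either ashift
 16K record / 6 data disks ~=  2.7K per disk  -> ashift=9 emits 512B sectors,
                                                 and a 4K-physical drive has to
                                                 read-modify-write each one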

I definitely wouldn't be surprised if ashift=9 vs. ashift=12 ends up being the issue here. I wish I had 24 disks lying around to test, because now I'm curious as well.
 

sth

Active Member
Oct 29, 2015
Thanks for the help, guys. Here's the full output of the zdb command... looks like ashift=12, though.

Code:
zdb -U /data/zfs/zpool.cache
TANK:
    version: 5000
    name: 'TANK'
    state: 0
    txg: 40239
    pool_guid: 16507166921472422085
    hostid: 1427528986
    hostname: 'freenas826.local.lan'
    vdev_children: 4
    vdev_tree:
        type: 'root'
        id: 0
        guid: 16507166921472422085
        create_txg: 4
        children[0]:
            type: 'raidz'
            id: 0
            guid: 17901899365956536076
            nparity: 2
            metaslab_array: 41
            metaslab_shift: 38
            ashift: 12
            asize: 31989077901312
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 16683411425074944316
                path: '/dev/gptid/b0103130-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 129
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 3808633636220662989
                path: '/dev/gptid/b1858301-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 128
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 6865155263484773987
                path: '/dev/gptid/b30755c8-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 127
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 3129195038383739140
                path: '/dev/gptid/b4656e96-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 126
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 16851594806130473045
                path: '/dev/gptid/b5e124f4-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 125
                create_txg: 4
            children[5]:
                type: 'disk'
                id: 5
                guid: 8637982848107491375
                path: '/dev/gptid/b78fe5a3-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 124
                create_txg: 4
            children[6]:
                type: 'disk'
                id: 6
                guid: 10625254421541289661
                path: '/dev/gptid/b91595e3-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 123
                create_txg: 4
            children[7]:
                type: 'disk'
                id: 7
                guid: 4531785168878392080
                path: '/dev/gptid/ba27b19f-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 122
                create_txg: 4
        children[1]:
            type: 'raidz'
            id: 1
            guid: 5080360278934488425
            nparity: 2
            metaslab_array: 39
            metaslab_shift: 38
            ashift: 12
            asize: 31989077901312
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 16246573185704867732
                path: '/dev/gptid/bb54b1e9-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 137
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 11680999326859724467
                path: '/dev/gptid/bc5c2efa-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 136
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 18153370949237993608
                path: '/dev/gptid/bdc3c421-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 135
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 15428781047789136941
                path: '/dev/gptid/bee515ab-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 134
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 12740329624770822338
                path: '/dev/gptid/c05a0cf7-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 133
                create_txg: 4
            children[5]:
                type: 'disk'
                id: 5
                guid: 3842879526121555946
                path: '/dev/gptid/c17bba52-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 132
                create_txg: 4
            children[6]:
                type: 'disk'
                id: 6
                guid: 3687168655134491029
                path: '/dev/gptid/c252e40b-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 131
                create_txg: 4
            children[7]:
                type: 'disk'
                id: 7
                guid: 17022130803820690065
                path: '/dev/gptid/c3823f4f-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 130
                create_txg: 4
        children[2]:
            type: 'raidz'
            id: 2
            guid: 3713932782373410402
            nparity: 2
            metaslab_array: 37
            metaslab_shift: 38
            ashift: 12
            asize: 31989077901312
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 15482890838662856639
                path: '/dev/gptid/c4b956ae-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 121
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 3584002704295252492
                path: '/dev/gptid/c5cde2fe-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 120
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 6050642064207188956
                path: '/dev/gptid/c6f66ae7-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 119
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 3826564569797822197
                path: '/dev/gptid/c8164bf5-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 118
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 10866110603520529971
                path: '/dev/gptid/c90bea93-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 117
                create_txg: 4
            children[5]:
                type: 'disk'
                id: 5
                guid: 3094508051025459402
                path: '/dev/gptid/ca68f866-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 100
                create_txg: 4
            children[6]:
                type: 'disk'
                id: 6
                guid: 516464105230378376
                path: '/dev/gptid/cb7ba68b-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 116
                create_txg: 4
            children[7]:
                type: 'disk'
                id: 7
                guid: 12029508303593454200
                path: '/dev/gptid/cca90030-9fd6-11e6-a373-0007430495a0'
                whole_disk: 1
                DTL: 115
                create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 12428922533447604369
            path: '/dev/gptid/cceac023-9fd6-11e6-a373-0007430495a0'
            whole_disk: 1
            metaslab_array: 36
            metaslab_shift: 31
            ashift: 9
            asize: 400083648512
            is_log: 1
            DTL: 138
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
 

Kuz

Member
Oct 7, 2016
What's the downside to SMR? I thought I read somewhere that the 5TB wouldn't be as desirable if it uses that tech.
 

Krailor

New Member
Sep 22, 2015
What's the downside to SMR? I thought I read somewhere that the 5TB wouldn't be as desirable if it uses that tech.
SMR is meant for bulk storage; it does fine with sequential reads/writes but is horrible for random reads/writes.

As long as you're just using these drives to store data (backups, media, etc.) and don't try to use them as a boot drive, install programs on them, or run VMs off of them, you'll be fine.
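If you want to see the difference for yourself before trusting one with real data, a quick fio pair along these lines would show it (path is a placeholder; results may look fine until the drive's persistent cache fills):

Code:
# Sequential 1M writes: SMR handles these well
fio --name=seq --filename=/mnt/smr/testfile --rw=write --bs=1M --size=8G --direct=1
# Random 4K writes: shingled zones force rewrites, expect this to crater
fio --name=rand --filename=/mnt/smr/testfile --rw=randwrite --bs=4k --size=8G --direct=1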
 

Kuz

Member
Oct 7, 2016
Perfect. Yeah, I just want some for my 24-bay Supermicro box. Boot/VMs are on some old SSDs I had lying around; I'm just looking for mass storage for NAS/Plex.
 

Chris Audi

New Member
Jun 2, 2015
After reading this, I am tempted to load up my 32-bay Dell PowerEdge T630 with these, using either a PERC H330 or H730P.
 

sth

Active Member
Oct 29, 2015
So, having had a fair few problems with really crappy performance under FreeNAS but seeing slightly better perf under Linux, I decided to trial a subset of these drives under Napp-it.

Even a single stripe of 8 disks beats out my old 24 drive array.

Code:
                              capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
rpool                      30.4G  6.62G      0      0      0      0
  c4t2d0s0                 30.4G  6.62G      0      0      0      0
-------------------------  -----  -----  -----  -----  -----  -----
tank                       15.6G  29.0T      0  1.82K      0   233M
  raidz2                   15.6G  29.0T      0  1.82K      0   233M
    c9t5000C5009034AE8Ad0      -      -      0    939      0  40.6M
    c9t5000C5009BB7A250d0      -      -      0    814      0  40.7M
    c9t5000C5009B626FD9d0      -      -      0    866      0  40.6M
    c9t5000C5009BA23EB0d0      -      -      0    869      0  40.3M
    c9t5000C5009BAC7953d0      -      -      0    762      0  40.7M
    c9t5000C5009BB7CECDd0      -      -      0    838      0  40.6M
    c9t5000C5009BBBF0EFd0      -      -      0    877      0  40.7M
    c9t5000C5009BFFAE40d0      -      -      0    899      0  40.6M
-------------------------  -----  -----  -----  -----  -----  -----
More testing to come....
 

sth

Active Member
Oct 29, 2015
Well, more testing has thrown another spanner in the works: under Napp-it my READ performance sucks, but my writes are fine!

Under FreeNAS with 24 disks I see reads of 1200MB/s and writes of 60MB/s. Under Napp-it on the same hardware, but with only 8 disks in a single RAIDZ2, I see reads of 10MB/s but writes of 300MB/s.

Code:
                              capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
rpool                      30.7G  6.27G      0     13      0   153K
  c4t2d0s0                 30.7G  6.27G      0     13      0   153K
-------------------------  -----  -----  -----  -----  -----  -----
tank                       24.7G  29.0T    349      0  12.1M      0
  raidz2                   24.7G  29.0T    349      0  12.1M      0
    c9t5000C5009BF6B40Ed0      -      -     58      0  1.21M      0
    c9t5000C5009B626FD9d0      -      -    209      0  1.75M      0
    c9t5000C5009BA23EB0d0      -      -    213      0  1.85M      0
    c9t5000C5009BAC7953d0      -      -    176      0  1.77M      0
    c9t5000C5009BB7A250d0      -      -    172      0  1.66M      0
    c9t5000C5009BB7CECDd0      -      -     61      0  1.27M      0
    c9t5000C5009BBBF0EFd0      -      -     64      0  1.36M      0
    c9t5000C5009BFFAE40d0      -      -     62      0  1.29M      0
logs                           -      -      -      -      -      -
  c2t1d0                    128K   372G      0      0      0      0
cache                          -      -      -      -      -      -
  c1t1d0                   5.98G   739G      0      0      0      0
-------------------------  -----  -----  -----  -----  -----  -----
I'm really struggling to understand what the hell is going on with these disks.
 

cperalt1

Active Member
Feb 23, 2015
Looks like your problem is a mixed-ashift pool. You have one device throwing a spanner in the works by being detected as ashift=9 (your log device, going by is_log: 1), so something must be off in what that drive reports. That lines up with your poor write numbers, since every drive except that one is being detected as a 4K drive.

Code:
        children[3]:
            type: 'disk'
            id: 3
            guid: 12428922533447604369
            path: '/dev/gptid/cceac023-9fd6-11e6-a373-0007430495a0'
            whole_disk: 1
            metaslab_array: 36
            metaslab_shift: 31
            ashift: 9
            asize: 400083648512
            is_log: 1
            DTL: 138
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
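If you want to rule that out, log vdevs can be removed and re-added live. Something like this should do it, assuming your FreeNAS build exposes the min_auto_ashift sysctl (the FreeBSD 10-based builds do; the gptid is the one from the zdb output above):

Code:
# Drop the mis-aligned SLOG (safe for log vdevs)
zpool remove TANK gptid/cceac023-9fd6-11e6-a373-0007430495a0
# Refuse anything below 4K alignment from here on
sysctl vfs.zfs.min_auto_ashift=12
# Re-add it; it should come back as ashift=12
zpool add TANK log gptid/cceac023-9fd6-11e6-a373-0007430495a0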
 

sth

Active Member
Oct 29, 2015
Quick update on testing.

- Updated LSI HBA firmware from 20.00.04.00 to the latest 20.00.07.00
- Created a new two-drive mirror
- Re-ran the same basic 'dd' tests (sketched below)
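For reference, the 'dd' tests are basically this (paths and sizes illustrative; compression off so the zeros aren't compressed away):

Code:
# write test: push ~100GB through ZFS to the mirror
dd if=/dev/zero of=/tank/testfile bs=1M count=100000
# read test: file is bigger than RAM so ARC can't serve it all back
dd if=/tank/testfile of=/dev/null bs=1M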

Here are the read and write results (edited for readability) under the latest OmniOS & Napp-it (BTW, I'm really liking what gea is doing with it).

TL;DR: writes fine, reads suck.

Code:
                              capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
rpool                      30.7G  6.27G      0      0      0      0
  c4t2d0s0                 30.7G  6.27G      0      0      0      0
-------------------------  -----  -----  -----  -----  -----  -----
tank                       2.46G  3.62T      0    855      0   107M
  mirror                   2.46G  3.62T      0    855      0   107M
    c9t5000C5009B626FD9d0      -      -      0    864      0   108M
    c9t5000C5009BA23EB0d0      -      -      0    856      0   107M
-------------------------  -----  -----  -----  -----  -----  -----


                              capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
rpool                      30.7G  6.27G      0      0      0      0
  c4t2d0s0                 30.7G  6.27G      0      0      0      0
-------------------------  -----  -----  -----  -----  -----  -----
tank                       9.99G  3.62T    781      0  73.6M      0
  mirror                   9.99G  3.62T    781      0  73.6M      0
    c9t5000C5009B626FD9d0      -      -    392      0  37.0M      0
    c9t5000C5009BA23EB0d0      -      -    388      0  36.6M      0
-------------------------  -----  -----  -----  -----  -----  -----
and the zdb output confirming we are running ashift=12 on these drives:

Code:
tank:
    version: 5000
    name: 'tank'
    state: 0
    txg: 4
    pool_guid: 1063309618748155760
    hostid: 1781033850
    hostname: 'sc216'
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 1063309618748155760
        create_txg: 4
        children[0]:
            type: 'mirror'
            id: 0
            guid: 5359602738080923202
            metaslab_array: 38
            metaslab_shift: 35
            ashift: 12
            asize: 4000773570560
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 35
            children[0]:
                type: 'disk'
                id: 0
                guid: 15463287561732432854
                path: '/dev/dsk/c9t5000C5009B626FD9d0s0'
                devid: 'id1,sd@n5000c5009b626fd9/a'
                phys_path: '/scsi_vhci/disk@g5000c5009b626fd9:a'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 36
            children[1]:
                type: 'disk'
                id: 1
                guid: 4617305566589157708
                path: '/dev/dsk/c9t5000C5009BA23EB0d0s0'
                devid: 'id1,sd@n5000c5009ba23eb0/a'
                phys_path: '/scsi_vhci/disk@g5000c5009ba23eb0:a'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 37
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
 

Churchill

Admiral
Jan 6, 2016
I purchased 4 of these and put them in my Windows 10 box as a Storage Spaces (parity/NTFS) pool.

I am barely getting above 17MB/sec to these drives, and they are dog slow on copying. I'm not sure if this is a problem with Storage Spaces being meh or with my controller (LSI 2308). I'm seeing 100% disk usage and slow copy speeds. I have tried copying from an SMB share (which gets over 80MB/s) and from a USB 3.0 RAID (which gets 40-50MB/s); both show wait times on the disks.


I am using a 4x 2.5" to single 5.25" bay converter, but it's rated for SAS2/3, so I'm not sure if that's the issue.

All disks were run through a clearing/burn-in test with no failed sectors or units. The disks are good, but I'm seeing massive wait times on them.
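If anyone wants to compare, the parity-space layout parameters are easy to pull in PowerShell (names will differ per box; a bad column count or interleave makes parity writes slow by design):

Code:
# Column count and interleave for each virtual disk
Get-VirtualDisk | Format-Table FriendlyName, ResiliencySettingName, NumberOfColumns, Interleave
# And check whether Windows sees the drives as 4K-physical
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, LogicalSectorSize, PhysicalSectorSize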
 

sth

Active Member
Oct 29, 2015
Same here. I've tried them under FreeNAS (FreeBSD), Linux, and Napp-it (OmniOS) and had pretty terrible perf across all platforms. I've tried them connected directly to X10SRL-F motherboard ports and to 9207/9211 cards and not seen much improvement, although things have stabilised with the 20.00.07 firmware Gea put me onto. I'm seeing similar high wait times in iostat too.
One problem with the low performance is that FreeNAS kicks drives out of the array; on my 24-drive testbed I had 8 drives go UNAVAIL simultaneously. Luckily I caught this before putting the system into production, but as it stands I'd say these drives are only good for shuttling data around in a USB enclosure, which is, TBH, what they were originally intended for.