> This is the first time I have ever seen someone suggest that ZFS on Linux might be faster than on FreeBSD or Solaris. That's new.

I just don't like FreeNAS (hopefully FreeNAS 10 will change my opinion). In all seriousness, I believe ZoL is more aggressive about tuning and enabling performance features by default. For example, ZoL is more likely to select ashift=12 by default, at the cost of some space efficiency.
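If you'd rather not rely on autodetection at all, you can pin the alignment yourself at creation time. A minimal sketch on ZoL; the pool name and disk paths here are made up for illustration:

# Force 4K alignment (2^12 = 4096 bytes) at pool creation on ZFS on Linux;
# pool name and device paths are illustrative
zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4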
> But the suggestion to open a shell on your FreeNAS box and check "zdb | grep ashift" is a good idea.

Thinking about it a bit more, consider that with 3 x 8-disk RAIDZ2, incoming data is striped across the three vdevs, and each block that lands on a vdev is then split six ways (once per "data disk"). Compared to, say, a single 6-disk RAIDZ2, this raises the chance of ZFS writing a "small" chunk (smaller than 4K) to each disk even when the write workload is mostly "large" blocks: any block under 6 x 4K = 24K leaves each disk writing a partially filled, padded 4K sector. Reads would be less affected, as @sth seems to be seeing.
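The back-of-the-envelope version, with block sizes picked purely for illustration:

# Per-disk chunk when a block is split across the 6 data disks of an
# 8-wide RAIDZ2 (block sizes below are assumed examples, not measured)
echo $((131072 / 6))   # 128K block -> ~21845 bytes/disk: several full 4K sectors
echo $((16384 / 6))    # 16K block  ->  ~2730 bytes/disk: under one 4K sector,
                       # so each disk still writes a padded 4K sector

Note the dump below passes -U because FreeNAS keeps its pool cachefile in a non-default spot.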
zdb -U /data/zfs/zpool.cache
TANK:
version: 5000
name: 'TANK'
state: 0
txg: 40239
pool_guid: 16507166921472422085
hostid: 1427528986
hostname: 'freenas826.local.lan'
vdev_children: 4
vdev_tree:
type: 'root'
id: 0
guid: 16507166921472422085
create_txg: 4
children[0]:
type: 'raidz'
id: 0
guid: 17901899365956536076
nparity: 2
metaslab_array: 41
metaslab_shift: 38
ashift: 12
asize: 31989077901312
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 16683411425074944316
path: '/dev/gptid/b0103130-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 129
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 3808633636220662989
path: '/dev/gptid/b1858301-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 128
create_txg: 4
children[2]:
type: 'disk'
id: 2
guid: 6865155263484773987
path: '/dev/gptid/b30755c8-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 127
create_txg: 4
children[3]:
type: 'disk'
id: 3
guid: 3129195038383739140
path: '/dev/gptid/b4656e96-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 126
create_txg: 4
children[4]:
type: 'disk'
id: 4
guid: 16851594806130473045
path: '/dev/gptid/b5e124f4-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 125
create_txg: 4
children[5]:
type: 'disk'
id: 5
guid: 8637982848107491375
path: '/dev/gptid/b78fe5a3-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 124
create_txg: 4
children[6]:
type: 'disk'
id: 6
guid: 10625254421541289661
path: '/dev/gptid/b91595e3-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 123
create_txg: 4
children[7]:
type: 'disk'
id: 7
guid: 4531785168878392080
path: '/dev/gptid/ba27b19f-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 122
create_txg: 4
children[1]:
type: 'raidz'
id: 1
guid: 5080360278934488425
nparity: 2
metaslab_array: 39
metaslab_shift: 38
ashift: 12
asize: 31989077901312
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 16246573185704867732
path: '/dev/gptid/bb54b1e9-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 137
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 11680999326859724467
path: '/dev/gptid/bc5c2efa-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 136
create_txg: 4
children[2]:
type: 'disk'
id: 2
guid: 18153370949237993608
path: '/dev/gptid/bdc3c421-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 135
create_txg: 4
children[3]:
type: 'disk'
id: 3
guid: 15428781047789136941
path: '/dev/gptid/bee515ab-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 134
create_txg: 4
children[4]:
type: 'disk'
id: 4
guid: 12740329624770822338
path: '/dev/gptid/c05a0cf7-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 133
create_txg: 4
children[5]:
type: 'disk'
id: 5
guid: 3842879526121555946
path: '/dev/gptid/c17bba52-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 132
create_txg: 4
children[6]:
type: 'disk'
id: 6
guid: 3687168655134491029
path: '/dev/gptid/c252e40b-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 131
create_txg: 4
children[7]:
type: 'disk'
id: 7
guid: 17022130803820690065
path: '/dev/gptid/c3823f4f-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 130
create_txg: 4
children[2]:
type: 'raidz'
id: 2
guid: 3713932782373410402
nparity: 2
metaslab_array: 37
metaslab_shift: 38
ashift: 12
asize: 31989077901312
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 15482890838662856639
path: '/dev/gptid/c4b956ae-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 121
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 3584002704295252492
path: '/dev/gptid/c5cde2fe-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 120
create_txg: 4
children[2]:
type: 'disk'
id: 2
guid: 6050642064207188956
path: '/dev/gptid/c6f66ae7-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 119
create_txg: 4
children[3]:
type: 'disk'
id: 3
guid: 3826564569797822197
path: '/dev/gptid/c8164bf5-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 118
create_txg: 4
children[4]:
type: 'disk'
id: 4
guid: 10866110603520529971
path: '/dev/gptid/c90bea93-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 117
create_txg: 4
children[5]:
type: 'disk'
id: 5
guid: 3094508051025459402
path: '/dev/gptid/ca68f866-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 100
create_txg: 4
children[6]:
type: 'disk'
id: 6
guid: 516464105230378376
path: '/dev/gptid/cb7ba68b-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 116
create_txg: 4
children[7]:
type: 'disk'
id: 7
guid: 12029508303593454200
path: '/dev/gptid/cca90030-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
DTL: 115
create_txg: 4
children[3]:
type: 'disk'
id: 3
guid: 12428922533447604369
path: '/dev/gptid/cceac023-9fd6-11e6-a373-0007430495a0'
whole_disk: 1
metaslab_array: 36
metaslab_shift: 31
ashift: 9
asize: 400083648512
is_log: 1
DTL: 138
create_txg: 4
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data
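If you only care about alignment and the log/data split, the same dump filters down to a few lines. A sketch using the FreeNAS cachefile path shown above:

# Pull just the alignment and log-vs-data lines out of the dump
zdb -U /data/zfs/zpool.cache | egrep 'ashift|is_log'
# -> the three RAIDZ2 data vdevs report ashift: 12,
#    while the log device (children[3]) reports ashift: 9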
> Any signs of an updated version with the 5TB 2.5" Barracuda that launched yet?

Amazon has them scheduled for release on 11/29 for $160.
> What's the downside to SMR? I thought I read somewhere that the 5TB wouldn't be as desirable if they use that tech.

SMR is meant for bulk storage; it does fine with sequential reads and writes but is horrible for random writes, since the shingled (overlapping) tracks force the drive to rewrite a whole band of neighboring tracks to update a single sector.
                             capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
rpool                      30.4G  6.62G      0      0      0      0
  c4t2d0s0                 30.4G  6.62G      0      0      0      0
-------------------------  -----  -----  -----  -----  -----  -----
tank                       15.6G  29.0T      0  1.82K      0   233M
  raidz2                   15.6G  29.0T      0  1.82K      0   233M
    c9t5000C5009034AE8Ad0      -      -      0    939      0  40.6M
    c9t5000C5009BB7A250d0      -      -      0    814      0  40.7M
    c9t5000C5009B626FD9d0      -      -      0    866      0  40.6M
    c9t5000C5009BA23EB0d0      -      -      0    869      0  40.3M
    c9t5000C5009BAC7953d0      -      -      0    762      0  40.7M
    c9t5000C5009BB7CECDd0      -      -      0    838      0  40.6M
    c9t5000C5009BBBF0EFd0      -      -      0    877      0  40.7M
    c9t5000C5009BFFAE40d0      -      -      0    899      0  40.6M
-------------------------  -----  -----  -----  -----  -----  -----

                             capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
rpool                      30.7G  6.27G      0     13      0   153K
  c4t2d0s0                 30.7G  6.27G      0     13      0   153K
-------------------------  -----  -----  -----  -----  -----  -----
tank                       24.7G  29.0T    349      0  12.1M      0
  raidz2                   24.7G  29.0T    349      0  12.1M      0
    c9t5000C5009BF6B40Ed0      -      -     58      0  1.21M      0
    c9t5000C5009B626FD9d0      -      -    209      0  1.75M      0
    c9t5000C5009BA23EB0d0      -      -    213      0  1.85M      0
    c9t5000C5009BAC7953d0      -      -    176      0  1.77M      0
    c9t5000C5009BB7A250d0      -      -    172      0  1.66M      0
    c9t5000C5009BB7CECDd0      -      -     61      0  1.27M      0
    c9t5000C5009BBBF0EFd0      -      -     64      0  1.36M      0
    c9t5000C5009BFFAE40d0      -      -     62      0  1.29M      0
  logs                         -      -      -      -      -      -
    c2t1d0                  128K   372G      0      0      0      0
  cache                        -      -      -      -      -      -
    c1t1d0                 5.98G   739G      0      0      0      0
-------------------------  -----  -----  -----  -----  -----  -----
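For context, snapshots like these are what "zpool iostat -v" prints while a streaming test runs. A sketch of how such numbers are typically captured; the file path, size, and interval are my own choices, and compression should be off or /dev/zero will compress away to nothing:

# Write test: stream ~8 GiB into the pool, sampling per-device stats every 5 s
dd if=/dev/zero of=/tank/testfile bs=1024k count=8192 &
zpool iostat -v tank 5

For the read-back pass you also have to defeat the ARC (e.g. export and re-import the pool), or the disks will barely be touched.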
children[3]:
    type: 'disk'
    id: 3
    guid: 12428922533447604369
    path: '/dev/gptid/cceac023-9fd6-11e6-a373-0007430495a0'
    whole_disk: 1
    metaslab_array: 36
    metaslab_shift: 31
    ashift: 9
    asize: 400083648512
    is_log: 1
    DTL: 138
    create_txg: 4
features_for_read:
    com.delphix:hole_birth
    com.delphix:embedded_data
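That log device reporting ashift: 9 stands out against the ashift: 12 data vdevs. On reasonably recent FreeBSD/FreeNAS, the usual knob is the vfs.zfs.min_auto_ashift sysctl, which only affects vdevs created after it is set; and since log vdevs are removable, the SLOG can be recreated at 4K alignment. A sketch only, reusing the gptid from the dump above; don't treat it as a tested recipe:

# Make newly added vdevs use at least ashift=12 (FreeBSD/FreeNAS)
sysctl vfs.zfs.min_auto_ashift=12
# Remove and re-add the log device so it is recreated at the new alignment
zpool remove TANK gptid/cceac023-9fd6-11e6-a373-0007430495a0
zpool add TANK log gptid/cceac023-9fd6-11e6-a373-0007430495a0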
                             capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
rpool                      30.7G  6.27G      0      0      0      0
  c4t2d0s0                 30.7G  6.27G      0      0      0      0
-------------------------  -----  -----  -----  -----  -----  -----
tank                       2.46G  3.62T      0    855      0   107M
  mirror                   2.46G  3.62T      0    855      0   107M
    c9t5000C5009B626FD9d0      -      -      0    864      0   108M
    c9t5000C5009BA23EB0d0      -      -      0    856      0   107M
-------------------------  -----  -----  -----  -----  -----  -----

                             capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
rpool                      30.7G  6.27G      0      0      0      0
  c4t2d0s0                 30.7G  6.27G      0      0      0      0
-------------------------  -----  -----  -----  -----  -----  -----
tank                       9.99G  3.62T    781      0  73.6M      0
  mirror                   9.99G  3.62T    781      0  73.6M      0
    c9t5000C5009B626FD9d0      -      -    392      0  37.0M      0
    c9t5000C5009BA23EB0d0      -      -    388      0  36.6M      0
-------------------------  -----  -----  -----  -----  -----  -----
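Presumably the test mirror was created along these lines, reusing two of the drives from the RAIDZ2 runs (the device names come from the output above; the exact command is my guess, not quoted from the poster):

# Two-way mirror from two of the same drives (Solaris-style device names)
zpool create tank mirror c9t5000C5009B626FD9d0 c9t5000C5009BA23EB0d0

The zdb output below confirms the mirror came up at ashift: 12.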
tank:
version: 5000
name: 'tank'
state: 0
txg: 4
pool_guid: 1063309618748155760
hostid: 1781033850
hostname: 'sc216'
com.delphix:has_per_vdev_zaps
vdev_children: 1
vdev_tree:
type: 'root'
id: 0
guid: 1063309618748155760
create_txg: 4
children[0]:
type: 'mirror'
id: 0
guid: 5359602738080923202
metaslab_array: 38
metaslab_shift: 35
ashift: 12
asize: 4000773570560
is_log: 0
create_txg: 4
com.delphix:vdev_zap_top: 35
children[0]:
type: 'disk'
id: 0
guid: 15463287561732432854
path: '/dev/dsk/c9t5000C5009B626FD9d0s0'
devid: 'id1,sd@n5000c5009b626fd9/a'
phys_path: '/scsi_vhci/disk@g5000c5009b626fd9:a'
whole_disk: 1
create_txg: 4
com.delphix:vdev_zap_leaf: 36
children[1]:
type: 'disk'
id: 1
guid: 4617305566589157708
path: '/dev/dsk/c9t5000C5009BA23EB0d0s0'
devid: 'id1,sd@n5000c5009ba23eb0/a'
phys_path: '/scsi_vhci/disk@g5000c5009ba23eb0:a'
whole_disk: 1
create_txg: 4
com.delphix:vdev_zap_leaf: 37
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data