Set up a 6x SSD RAIDz1 array .... and it is SLOW!


TrumanHW

Active Member
Sep 16, 2018
Perhaps the T320 is just too slow for it ... but that seems unlikely, because my SPINNING drives get faster rates.

I'm getting

~650 MB/s READ
~300 MB/s WRITE

(Granted, the speeds are VERY consistent ... but I could get that performance with a single drive.)

RAIDz1 array comprised of 6x EVO 870 4TB SSD (SATA) ....
As in, Read is just barely faster than a single drive's performance.
And Write is half the performance of a single drive.

All transfers were of large (1GB+) video files.

Any idea what the bottleneck might be ...?

If I can't get faster than this with a SATA Flash array .....
Seems like a waste to try to build an NVMe Flash array.

I have an R730 I could retry this on, but I wasn't CPU limited.

Any help on what I could look into would be appreciated.
 

CyklonDX

Well-Known Member
Nov 8, 2022
What controller are you using? (Is the backplane SAS2?) What's your memory setup like (on the mobo: size & speed; in ZFS: what's your free memory)?
(Post your zpool configuration and the specs it's running on.)

That's unusually low performance for ZFS RAIDz1.
 

i386

Well-Known Member
Mar 18, 2016
In the past, consumer SSDs showed that behavior: they were "fast" and reached the numbers they were advertised with (up to xx k random IO / up to xxx MByte/s), but performance dropped once the onboard/SLC cache was full (aka "sustained workload"), sometimes to a level comparable to spinning rust...

What are the temperatures of the SSDs? (I don't think that should be a problem with that system unless the fans were modded or the BMC/IPMI stuff was changed.)
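(If it helps, smartctl can read the drive temperatures directly while a transfer is running; the da0..da5 device names below are just examples and need adjusting:)

Code:
# SMART temperature attribute for each SSD in the pool
for d in /dev/da0 /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5; do
  echo "== $d =="; smartctl -A "$d" | grep -i temperature
done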
 

TrumanHW

Active Member
Sep 16, 2018
i386 said:
In the past consumer SSDs showed that behavior ... performance dropped once the onboard/SLC cache was full ... What are the temperatures of the SSDs?


I'm with you on that, but these numbers are from the first second to the last of a 150 GB transfer.
No slowdowns over time (which would obviously be diagnostically useful, and I'd definitely have said so).

It has 48 GB ECC RAM at 1600 MHz ...

Also checked while testing .... nothing was hotter than about 40 C.

Hell, the same EXACT computer with 7200 rpm drives gets almost identical R/W.
Granted ...
 

TrumanHW

Active Member
Sep 16, 2018
PS -- it may be worth noting that not one of the files transferred was under 1 GB.
 

ano

Well-Known Member
Nov 7, 2022
What benchmark are you testing them with?

Sometimes SATA does weird stuff, consumer drives even more so.
 

TrumanHW

Active Member
Sep 16, 2018
I did a copy using a program that shows the MB/s .... here are the pictures (I just found some video from TV shows that I could use to copy around) ....

WRITING to the NAS: ~380 MB/s (perhaps slower than using 8x 4TB 7200 spinners in RAIDz2)

READING to the MBP: ~660 MB/s (again, slower than reading from 8x 4TB 7200 spinners in RAIDz2)


Next, I'll make a STRIPED ARRAY to test the speed the SSD array gets that way.
 

Attachments

ano

Well-Known Member
Nov 7, 2022
Yeah, SMB/CIFS from a Mac is more real-world testing than a benchmark; do an fio run, see what's possible.

Pic of an fio test, @BackupProphet, and strong? My record is 19.5 GB/s on 128K 100% random writes for NVMe devices on ZFS, 50 GB test file.
 

mattventura

Well-Known Member
Nov 9, 2022
Do an FIO test, also check that you used the correct ashift value for your block size. You can also check with both `iostat` and `zpool iostat` to see if there's a bottleneck on the storage side.
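For example, run these in two sessions while a copy is in flight (the trailing 1 is the refresh interval in seconds):

Code:
zpool iostat -v 1    # per-vdev / per-disk throughput as ZFS sees it
iostat -x 1          # per-device busy %, queue length and latency from the OS side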
 

CyklonDX

Well-Known Member
Nov 8, 2022
please do the following and post the results

(to give us an idea of how you created it, and what settings it has)
Code:
zpool history -i
zfs get all [poolname]

when transferring files, run (and post the results):
Code:
cat /proc/spl/kstat/zfs/arcstats

also post how much free memory your system has during the copy/write process.


Next, please also answer my original question about the controller being used...
Is this running with some sort of writeback/writethrough cache on the controller?
 

TrumanHW

Active Member
Sep 16, 2018
Use fio to benchmark ssds, raid etc. Anything else will give you bad numbers. I easily get 23GB/s on 4 nvme(f320) striped with ZFS by using fio with 20 threads

I hate to ask, but do you have a terminal command I can copy and paste?

I literally couldn't find a single good explanation of how to formulate the command. Thanks.
 

mattventura

Well-Known Member
Nov 9, 2022
Try this:

Code:
fio --filename=/dev/whatever --bs=64k --ioengine=libaio --iodepth=64 --runtime=30 --numjobs=16 --time_based --group_reporting --eta-newline=2 --rw=read --readonly --direct=1 --name=test-job
Change `read` to `write` (requires removing --readonly, i.e. it will nuke data), `randread`, `randwrite`, `rw`, or `randrw`. --bs is the block size, --iodepth is the queue depth, and --numjobs is effectively the parallel thread count.
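(If you'd rather test through ZFS instead of the raw device, a file-based variant along these lines should work; the directory, size, and the posixaio engine are assumptions to adjust, since libaio is Linux-only and won't be available on TrueNAS CORE:)

Code:
fio --name=seq-write --directory=/mnt/STRIPE --size=20G --bs=1M --rw=write \
  --ioengine=posixaio --iodepth=16 --numjobs=4 --runtime=60 --time_based \
  --group_reporting --eta-newline=2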
 

TrumanHW

Active Member
Sep 16, 2018
please post the following result:
zpool history -i

2023-04-07.14:36:27 zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/services
2023-04-07.14:36:38 [txg:10435] set boot-pool/.system (1191) acltype=0
2023-04-07.14:36:38 zfs set acltype=off boot-pool/.system
2023-04-07.14:38:48 [txg:10493] destroy boot-pool/.system/rrd-bd26c8fd36fd4618a75dffa52c04f828 (1424) (bptree, mintxg=1)
2023-04-07.14:38:48 [txg:10494] destroy boot-pool/.system/syslog-bd26c8fd36fd4618a75dffa52c04f828 (943) (bptree, mintxg=1)
2023-04-07.14:38:48 [txg:10495] destroy boot-pool/.system/webui (173) (bptree, mintxg=1)
2023-04-07.14:38:48 [txg:10496] destroy boot-pool/.system/cores (79) (bptree, mintxg=1)
2023-04-07.14:38:48 [txg:10497] destroy boot-pool/.system/samba4 (159) (bptree,mintxg=1)
2023-04-07.14:38:48 [txg:10498] destroy boot-pool/.system/configs-bd26c8fd36fd4618a75dffa52c04f828


zfs get all [poolname]
-
root@truenas[/]# zfs get all STRIPE
NAME PROPERTY VALUE SOURCE
STRIPE type filesystem -
STRIPE creation Fri Apr 7 14:38 2023 -
STRIPE used 189G -
STRIPE available 21.4T -
STRIPE referenced 189G -
STRIPE compressratio 1.00x -
STRIPE mounted yes -
STRIPE quota none default
STRIPE reservation none default
STRIPE recordsize 128K default
STRIPE mountpoint /mnt/STRIPE default
STRIPE sharenfs off default
STRIPE checksum on default
STRIPE compression off local
STRIPE atime off local
STRIPE devices on default
STRIPE exec on default
STRIPE setuid on default
STRIPE readonly off default
STRIPE jailed off default
STRIPE snapdir hidden default
STRIPE aclmode passthrough local
STRIPE aclinherit passthrough local
STRIPE createtxg 1 -
STRIPE canmount on default
STRIPE xattr on default
STRIPE copies 1 local
STRIPE version 5 -
STRIPE utf8only off -
STRIPE normalization none -
STRIPE casesensitivity sensitive -
STRIPE vscan off default
STRIPE nbmand off default
STRIPE sharesmb off default
STRIPE refquota none default
STRIPE refreservation none default
STRIPE guid 12179713466807502279 -
STRIPE primarycache all default
STRIPE secondarycache all default
STRIPE usedbysnapshots 0B -
STRIPE usedbydataset 189G -
STRIPE usedbychildren 129M -
STRIPE usedbyrefreservation 0B -
STRIPE logbias latency default
STRIPE objsetid 54 -
STRIPE dedup off default
STRIPE mlslabel none default
STRIPE sync standard default
STRIPE dnodesize legacy default
STRIPE refcompressratio 1.00x -
STRIPE written 189G -
STRIPE logicalused 189G -
STRIPE logicalreferenced 189G -
STRIPE volmode default default
STRIPE filesystem_limit none default
STRIPE snapshot_limit none default
STRIPE filesystem_count none default
STRIPE snapshot_count none default
STRIPE snapdev hidden default
STRIPE acltype nfsv4 default
STRIPE context none default
STRIPE fscontext none default
STRIPE defcontext none default
STRIPE rootcontext none default
STRIPE relatime off default
STRIPE redundant_metadata all default
STRIPE overlay



when transferring files run (and post results)
cat /proc/spl/kstat/zfs/arcstats

I'm not sure what you meant by
cat /
proc = processor
spl /
kstat /
zfs = ZFS tab of reports
Arcstats = Also on ZFS reports tab..?
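(Side note: that path is the Linux procfs interface; TrueNAS CORE is FreeBSD-based, where the same ARC counters should be exposed via sysctl, for example:)

Code:
sysctl kstat.zfs.misc.arcstats        # all ARC counters
sysctl kstat.zfs.misc.arcstats.size   # current ARC size only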


Post how much free RAM the system has during the copy/write process.
~4 GB free ... but I'm the only person using the system, and it usually has 32 GB free.
I'm the only user testing it, having it perform one task at a time (either 100% R or 100% W).



Is this running a writeback/writethrough cache from controller?
No sir: HP 9205i (an OEM's LSI ) in IT mode ...
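(To help rule out the HBA/backplane path, smartctl also reports each drive's negotiated SATA link speed; device names are examples:)

Code:
# look for "SATA Version is: ... (current: 6.0 Gb/s)" on each pool member
for d in /dev/da0 /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5; do
  echo "== $d =="; smartctl -i "$d" | grep -i 'sata version'
done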
 

mattventura

Well-Known Member
Nov 9, 2022
Hmm, if you didn't specify an ashift value when creating the pool, it's possible it could be wrong.

Also, for that matter, try read-only fio tests on the individual drives as well.
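(Adapting the command above to one member disk, read-only so it's non-destructive; the device name and the posixaio engine are assumptions for TrueNAS CORE:)

Code:
fio --filename=/dev/da0 --bs=128k --ioengine=posixaio --iodepth=32 --runtime=30 \
  --time_based --group_reporting --rw=read --readonly --name=single-disk-read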
 

CyklonDX

Well-Known Member
Nov 8, 2022
OK, last questions:
Code:
zpool status
fdisk -l

(I'm only interested in the disks listed by zpool status.)

That cat command is one thing you paste into your terminal. It should show the ARC cache usage.
Code:
root@KVMHost:~#  cat /proc/spl/kstat/zfs/arcstats

13 1 0x01 123 33456 176374573648 701004885147559

name                            type data

hits                            4    70836143

misses                          4    2275792

demand_data_hits                4    15022805

demand_data_misses              4    437019

demand_metadata_hits            4    55002204

demand_metadata_misses          4    95138

prefetch_data_hits              4    650034

prefetch_data_misses            4    972498

prefetch_metadata_hits          4    161100

prefetch_metadata_misses        4    771137

mru_hits                        4    16725621

mru_ghost_hits                  4    74044

mfu_hits                        4    53468879

mfu_ghost_hits                  4    113430

deleted                         4    6015039

mutex_miss                      4    270

access_skip                     4    3

evict_skip                      4    552

evict_not_enough                4    5

evict_l2_cached                 4    111233481728

evict_l2_eligible               4    645883289088

evict_l2_eligible_mfu           4    41221089280

evict_l2_eligible_mru           4    604662199808

evict_l2_ineligible             4    84395954176

evict_l2_skip                   4    0

hash_elements                   4    15253751

hash_elements_max               4    15630373

hash_collisions                 4    10236610

hash_chains                     4    2575575

hash_chain_max                  4    8

p                               4    32121285632

c                               4    34359738368

c_min                           4    8450541952

c_max                           4    34359738368

size                            4    34173446592

compressed_size                 4    30772676096

uncompressed_size               4    32109660160

overhead_size                   4    1298173952

hdr_size                        4    179296640

data_size                       4    31425387520

metadata_size                   4    645462528

dbuf_size                       4    116550528

dnode_size                      4    303789088

bonus_size                      4    89743040

anon_size                       4    3067904

anon_evictable_data             4    0

anon_evictable_metadata         4    0

mru_size                        4    28400167936

mru_evictable_data              4    27895302656

mru_evictable_metadata          4    7346688

mru_ghost_size                  4    5026174464

mru_ghost_evictable_data        4    3480724992

mru_ghost_evictable_metadata    4    1545449472

mfu_size                        4    3667614208

mfu_evictable_data              4    2096401920

mfu_evictable_metadata          4    27991552

mfu_ghost_size                  4    29328157184

mfu_ghost_evictable_data        4    10829726720

mfu_ghost_evictable_metadata    4    18498430464

l2_hits                         4    44860

l2_misses                       4    490489

l2_prefetch_asize               4    262586368

l2_mru_asize                    4    1476956499968

l2_mfu_asize                    4    442823114752

l2_bufc_data_asize              4    1918897631232

l2_bufc_metadata_asize          4    1144569856

l2_feeds                        4    688441

l2_rw_clash                     4    0

l2_read_bytes                   4    354184192

l2_write_bytes                  4    102120412160

l2_writes_sent                  4    43189

l2_writes_done                  4    43189

l2_writes_error                 4    0

l2_writes_lock_retry            4    3

l2_evict_lock_retry             4    1

l2_evict_reading                4    0

l2_evict_l1cached               4    4259

l2_free_on_write                4    0

l2_abort_lowmem                 4    0

l2_cksum_bad                    4    0

l2_io_error                     4    0

l2_size                         4    1934708814336

l2_asize                        4    1920028995584

l2_hdr_size                     4    1411131360

l2_log_blk_writes               4    775

l2_log_blk_avg_asize            4    12290

l2_log_blk_asize                4    189755392

l2_log_blk_count                4    14481

l2_data_to_meta_ratio           4    10826

l2_rebuild_success              4    1

l2_rebuild_unsupported          4    0

l2_rebuild_io_errors            4    0

l2_rebuild_dh_errors            4    0

l2_rebuild_cksum_lb_errors      4    0

l2_rebuild_lowmem               4    0

l2_rebuild_size                 4    1856068488704

l2_rebuild_asize                4    1841359478784

l2_rebuild_bufs                 4    14194558

l2_rebuild_bufs_precached       4    1405

l2_rebuild_log_blks             4    13889

memory_throttle_count           4    0

memory_direct_count             4    0

memory_indirect_count           4    0

memory_all_bytes                4    270417342464

memory_free_bytes               4    170048356352

memory_available_bytes          3    161098614400

arc_no_grow                     4    0

arc_tempreserve                 4    0

arc_loaned_bytes                4    0

arc_prune                       4    0

arc_meta_used                   4    2745973184

arc_meta_limit                  4    25769803776

arc_dnode_limit                 4    2576980377

arc_meta_max                    4    16463642176

arc_meta_min                    4    16777216

async_upgrade_sync              4    56917

demand_hit_predictive_prefetch  4    901790

demand_hit_prescient_prefetch   4    12337905

arc_need_free                   4    0

arc_sys_free                    4    8949741952

arc_raw_size                    4    0

cached_only_in_progress         4    0

This pool was created by the OS setup; is it also your boot pool?

zpool history -i [poolname]


Additional comments:
This SAS2 controller doesn't really support SSDs; it's not meant to be used with SSDs (but that's fine). Are you using both links to the backplane?
Your usedbydataset value is quite large.
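(Rough sanity check on the SAS2 link itself, assuming one x4 wide port to the backplane:)

Code:
# 6 Gbit/s per lane with 8b/10b encoding => ~600 MB/s usable per lane
# 4 lanes x ~600 MB/s                    => ~2.4 GB/s for a single x4 SAS2 link
# so even one link is well above the ~650 MB/s read / ~380 MB/s write seen over SMB,
# while 6 SATA SSDs at ~500 MB/s each (~3 GB/s) could exceed it in a local striped read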
 

CyklonDX

Well-Known Member
Nov 8, 2022
I would recommend locking your ZFS memory usage to a specific amount; half the memory in the box is a good starting point.

You should create the following file:
/etc/modprobe.d/zfs.conf

with the following inside (this limits the ARC to 24 GB of RAM):

Code:
options zfs zfs_arc_max=25769803776
options zfs zfs_arc_min=25769803775

If your root is on ZFS you must update your initramfs each time this changes:
update-initramfs -u -k all

and reboot to activate the change.
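(Note: /etc/modprobe.d applies to ZFS on Linux. On TrueNAS CORE / FreeBSD the equivalent cap would be set as a loader tunable, something like the line below; the exact tunable name is worth double-checking for your OpenZFS version:)

Code:
# /boot/loader.conf, or a TrueNAS "Tunable" of type LOADER -- caps the ARC at 24 GiB
vfs.zfs.arc_max="25769803776"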


Next, you should enable lz4 compression; Sandy Bridge CPUs are fast enough, so you would gain performance at the cost of a little more CPU usage.
What system is this? FreeNAS?


I would recommend getting a SAS3 controller (they aren't expensive, e.g. a 9300-8i) with mini-SAS 36-pin SFF-8087 to mini-SAS SFF-8643 cables to connect to your backplane. Or, if you prefer to stay on a SAS2 controller, the LSI 9217-8i; those are optimized for SSDs but get really hot, so a fan would be recommended for your case.

Can you run an internal test of the performance?
Code:
dd if=/dev/zero of=/WHATEVER_PATH_TO_ZFS/testlarge.img bs=32G count=16 oflag=dsync
dd if=/dev/zero of=/WHATEVER_PATH_TO_ZFS/testsmall.img bs=1M count=4096 oflag=dsync

(adjust the path to be on your ZFS storage)
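(One caveat on the first command: dd allocates bs bytes as its in-memory buffer, so bs=32G is larger than comfortable on a 48 GB box; the same 512 GiB can be written with a 1 GiB buffer as below. The path is assumed to be the pool mountpoint, oflag=dsync is kept from the command above and can be dropped if FreeBSD's dd rejects it, and if lz4 ends up enabled /dev/zero will just compress away, so fio or random data would then be more meaningful:)

Code:
# same 512 GiB total as bs=32G count=16, but with a 1 GiB buffer that fits in RAM
dd if=/dev/zero of=/mnt/STRIPE/testlarge.img bs=1g count=512 oflag=dsync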
 

TrumanHW

Active Member
Sep 16, 2018
ok last questions
zpool status




root@truenas[~]# zpool status
pool: STRIPE
state: ONLINE
config:

NAME STATE READ WRITE CKSUM
STRIPE ONLINE 0 0 0
gptid/8f84048b-d58c-11ed-8f31-0060dd45819a ONLINE 0 0 0
gptid/8fa4d46f-d58c-11ed-8f31-0060dd45819a ONLINE 0 0 0
gptid/8f769389-d58c-11ed-8f31-0060dd45819a ONLINE 0 0 0
gptid/8f7944a2-d58c-11ed-8f31-0060dd45819a ONLINE 0 0 0
gptid/8fa86680-d58c-11ed-8f31-0060dd45819a ONLINE 0 0 0
gptid/8fa05683-d58c-11ed-8f31-0060dd45819a ONLINE 0 0 0

errors: No known data errors

pool: boot-pool
state: ONLINE
scan: scrub repaired 0B in 00:00:07 with 0 errors on Fri Apr 7 03:45:07 2023
config:

NAME STATE READ WRITE CKSUM
boot-pool ONLINE 0 0 0
ada0p2 0 0 0

errors: No known errors






root@truenas[~]# zpool history -i STRIPE
History for 'STRIPE':
2023-04-07.14:38:40 [txg:4] create pool version 5000; software version zfs-9ef0b67f8; uts truenas.local 13.1-RELEASE-p7 1301000 amd64
2023-04-07.14:38:40 [txg:4] set feature@lz4_compress=enabled
2023-04-07.14:38:40 [txg:4] set failmode=1
2023-04-07.14:38:40 [txg:4] set autoexpand=1
2023-04-07.14:38:40 [txg:4] set ashift=12
2023-04-07.14:38:40 [txg:4] set feature@async_destroy=enabled
2023-04-07.14:38:40 [txg:4] set feature@empty_bpobj=enabled
2023-04-07.14:38:40 [txg:4] set feature@multi_vdev_crash_dump=enabled
2023-04-07.14:38:40 [txg:4] set feature@spacemap_histogram=enabled
2023-04-07.14:38:40 [txg:4] set feature@enabled_txg=enabled
2023-04-07.14:38:40 [txg:4] set feature@hole_birth=enabled
2023-04-07.14:38:40 [txg:4] set feature@extensible_dataset=enabled
2023-04-07.14:38:40 [txg:4] set feature@embedded_data=enabled
2023-04-07.14:38:40 [txg:4] set feature@bookmarks=enabled
2023-04-07.14:38:40 [txg:4] set feature@filesystem_limits=enabled
2023-04-07.14:38:40 [txg:4] set feature@large_blocks=enabled
2023-04-07.14:38:40 [txg:4] set feature@large_dnode=enabled
2023-04-07.14:38:40 [txg:4] set feature@sha512=enabled
2023-04-07.14:38:40 [txg:4] set feature@skein=enabled
2023-04-07.14:38:40 [txg:4] set feature@userobj_accounting=enabled
2023-04-07.14:38:40 [txg:4] set feature@encryption=enabled
2023-04-07.14:38:40 [txg:4] set feature@project_quota=enabled
2023-04-07.14:38:40 [txg:4] set feature@device_removal=enabled
2023-04-07.14:38:40 [txg:4] set feature@obsolete_counts=enabled
2023-04-07.14:38:40 [txg:4] set feature@zpool_checkpoint=enabled
2023-04-07.14:38:40 [txg:4] set feature@spacemap_v2=enabled
2023-04-07.14:38:40 [txg:4] set feature@allocation_classes=enabled
2023-04-07.14:38:40 [txg:4] set feature@resilver_defer=enabled
2023-04-07.14:38:40 [txg:4] set feature@bookmark_v2=enabled
2023-04-07.14:38:40 [txg:4] set feature@redaction_bookmarks=enabled
2023-04-07.14:38:40 [txg:4] set feature@redacted_datasets=enabled
2023-04-07.14:38:40 [txg:4] set feature@bookmark_written=enabled
2023-04-07.14:38:40 [txg:4] set feature@log_spacemap=enabled
2023-04-07.14:38:40 [txg:4] set feature@livelist=enabled
2023-04-07.14:38:40 [txg:4] set feature@device_rebuild=enabled
2023-04-07.14:38:40 [txg:4] set feature@zstd_compress=enabled
2023-04-07.14:38:40 [txg:4] set feature@draid=enabled
2023-04-07.14:38:40 [txg:5] set STRIPE (54) atime=0
2023-04-07.14:38:40 [txg:5] set STRIPE (54) compression=15
2023-04-07.14:38:40 [txg:5] set STRIPE (54) aclinherit=3
2023-04-07.14:38:40 [txg:5] set STRIPE (54) mountpoint=/STRIPE
2023-04-07.14:38:40 [txg:5] set STRIPE (54) aclmode=3
2023-04-07.14:38:40 zpool create -o feature@lz4_compress=enabled -o altroot=/mnt -o cachefile=/data/zfs/zpool.cache -o failmode=continue -o autoexpand=on -o ashift12 -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@multi_vdev_crash_dump=enabled -o feature@spacemap_histogram=enabled -o feature@enabled_txg=enabled -o feature@hole_birth=enabled -o feature@extensible_dataset=enabled -o feature@embedded_data=enabled -o feature@bookmarks=enabled -o feature@filesystem_limits=enabled -o feature@large_blocks=enabled -o feature@large_dnode=enabled -o feature@sha512=enabled -o feature@skein=enabled -o feature@userobj_accounting=enabled -o feature@encryption=enabled -o feature@project_quota=enabled -o feature@device_removal=enabled -o feature@obsolete_counts=enabled -o feature@zpool_checkpoint=enabled -o feature@spacemap_v2=enabled -o feature@allocation_classes=enabled -o feature@resilver_defer=enabled -o feature@bookmark_v2=enabled -o feature@redaction_bookmarks=enabled -o feature@redacted_datasets=enabled -o feature@bookmark_written=enabled -o feature@log_spacemap=enabled -o feature@livelist=enabled -o feature@device_rebuild=enabled -o feature@zstd_compress=enabled -o feature@draid=enabled -O atime=off -O compression=lz4 -O aclinherit=passthrough -O mountpoint=/STRIPE -O aclmode=passthrough STRIPE /dev/gptid/8f84048b-d58c-11ed-8f31-0060dd45819a /dev/gptid/8fa4d46f-d58c-11ed-8f31-0060dd45819a /dev/gptid/8f769389-d58c-11ed-8f31-0060dd45819a /dev/gptid/8f7944a2-d58c-11ed-8f31-0060dd45819a /dev/gptid/8fa86680-d58c-11ed-8f31-0060dd45819a /dev/gptid/8fa05683-d58c-11ed-8f31-0060dd45819a
2023-04-07.14:38:40 [txg:6] inherit STRIPE (54) mountpoint=/
2023-04-07.14:38:40 zfs inherit STRIPE
2023-04-07.14:38:41 [txg:10] create STRIPE/.system (144)
2023-04-07.14:38:41 [txg:11] set STRIPE/.system (144) mountpoint=legacy
2023-04-07.14:38:41 [txg:11] set STRIPE/.system (144) readonly=0
2023-04-07.14:38:41 (22ms) ioctl create
input:
type: 2
props:
mountpoint: 'legacy'
readonly: 0

2023-04-07.14:38:41 zfs create -o mountpoint=legacy -o readonly=off STRIPE/.system
2023-04-07.14:38:41 [txg:12] create STRIPE/.system/cores (643)
2023-04-07.14:38:41 [txg:13] set STRIPE/.system/cores (643) quota=1073741824
2023-04-07.14:38:41 [txg:14] set STRIPE/.system/cores (643) mountpoint=legacy
2023-04-07.14:38:41 [txg:14] set STRIPE/.system/cores (643) readonly=0
2023-04-07.14:38:41 (24ms) ioctl create
input:
type: 2
props:
mountpoint: 'legacy'
readonly: 0
quota: 1073741824

2023-04-07.14:38:41 zfs create -o mountpoint=legacy -o readonly=off -o quota=1G STRIPE/.system/cores
2023-04-07.14:38:41 [txg:15] create STRIPE/.system/samba4 (899)
2023-04-07.14:38:41 [txg:16] set STRIPE/.system/samba4 (899) mountpoint=legacy
2023-04-07.14:38:41 [txg:16] set STRIPE/.system/samba4 (899) readonly=0
2023-04-07.14:38:41 (18ms) ioctl create
input:
type: 2
props:
mountpoint: 'legacy'
readonly: 0

2023-04-07.14:38:41 zfs create -o mountpoint=legacy -o readonly=off STRIPE/.system/samba4
2023-04-07.14:38:41 [txg:17] create STRIPE/.system/syslog-bd26c8fd36fd4618a75dffa52c04f828 (260)
2023-04-07.14:38:41 [txg:18] set STRIPE/.system/syslog-bd26c8fd36fd4618a75dffa52c04f828 (260) mountpoint=legacy
2023-04-07.14:38:41 [txg:18] set STRIPE/.system/syslog-bd26c8fd36fd4618a75dffa52c04f828 (260) readonly=0
2023-04-07.14:38:41 (18ms) ioctl create
input:
type: 2
props:
mountpoint: 'legacy'
readonly: 0

2023-04-07.14:38:41 zfs create -o mountpoint=legacy -o readonly=off STRIPE/.system/syslog-bd26c8fd36fd4618a75dffa52c04f828
2023-04-07.14:38:41 [txg:19] create STRIPE/.system/rrd-bd26c8fd36fd4618a75dffa52c04f828 (517)
2023-04-07.14:38:41 [txg:20] set STRIPE/.system/rrd-bd26c8fd36fd4618a75dffa52c04f828 (517) mountpoint=legacy
2023-04-07.14:38:41 [txg:20] set STRIPE/.system/rrd-bd26c8fd36fd4618a75dffa52c04f828 (517) readonly=0
2023-04-07.14:38:41 (18ms) ioctl create
input:
type: 2
props:
mountpoint: 'legacy'
readonly: 0

2023-04-07.14:38:41 zfs create -o mountpoint=legacy -o readonly=off STRIPE/.system/rrd-bd26c8fd36fd4618a75dffa52c04f828
2023-04-07.14:38:41 [txg:21] create STRIPE/.system/configs-bd26c8fd36fd4618a75dffa52c04f828 (772)
2023-04-07.14:38:41 [txg:22] set STRIPE/.system/configs-bd26c8fd36fd4618a75dffa52c04f828 (772) mountpoint=legacy
2023-04-07.14:38:41 [txg:22] set STRIPE/.system/configs-bd26c8fd36fd4618a75dffa52c04f828 (772) readonly=0
2023-04-07.14:38:41 (18ms) ioctl create
input:
type: 2
props:
mountpoint: 'legacy'
readonly: 0

2023-04-07.14:38:41 zfs create -o mountpoint=legacy -o readonly=off STRIPE/.system/configs-bd26c8fd36fd4618a75dffa52c04f828
2023-04-07.14:38:41 [txg:23] create STRIPE/.system/webui (524)
2023-04-07.14:38:41 [txg:24] set STRIPE/.system/webui (524) mountpoint=legacy
2023-04-07.14:38:41 [txg:24] set STRIPE/.system/webui (524) readonly=0
2023-04-07.14:38:41 (19ms) ioctl create
input:
type: 2
props:
mountpoint: 'legacy'
readonly: 0

2023-04-07.14:38:41 zfs create -o mountpoint=legacy -o readonly=off STRIPE/.system/webui
2023-04-07.14:38:41 [txg:25] create STRIPE/.system/services (906)
2023-04-07.14:38:41 [txg:26] set STRIPE/.system/services (906) mountpoint=legacy
2023-04-07.14:38:41 [txg:26] set STRIPE/.system/services (906) readonly=0
2023-04-07.14:38:41 (18ms) ioctl create
input:
type: 2
props:
mountpoint: 'legacy'
readonly: 0

2023-04-07.14:38:41 zfs create -o mountpoint=legacy -o readonly=off STRIPE/.system/services
2023-04-07.14:38:52 [txg:61] set STRIPE/.system (144) acltype=0
2023-04-07.14:38:52 zfs set acltype=off STRIPE/.system
2023-04-07.14:42:51 [txg:108] create STRIPE/DATUHDATUH (1411)
2023-04-07.14:42:51 [txg:109] set STRIPE/DATUHDATUH (1411) quota=0
2023-04-07.14:42:51 [txg:110] set STRIPE/DATUHDATUH (1411) refquota=0
2023-04-07.14:42:51 [txg:111] set STRIPE/DATUHDATUH (1411) refreservation=0
2023-04-07.14:42:51 [txg:112] set STRIPE/DATUHDATUH (1411) reservation=0
2023-04-07.14:42:51 [txg:113] set STRIPE/DATUHDATUH (1411) aclmode=3
2023-04-07.14:42:51 [txg:113] set STRIPE/DATUHDATUH (1411) atime=0
2023-04-07.14:42:51 [txg:113] set STRIPE/DATUHDATUH (1411) compression=2
2023-04-07.14:42:51 [txg:113] set STRIPE/DATUHDATUH (1411) copies=1
2023-04-07.14:42:51 [txg:113] set STRIPE/DATUHDATUH (1411) org.truenas:managedby=10.0.184.111
2023-04-07.14:42:51 [txg:113] set STRIPE/DATUHDATUH (1411) sync=2
2023-04-07.14:42:51 [txg:113] set STRIPE/DATUHDATUH (1411) xattr=2
2023-04-07.14:42:51 [txg:113] set STRIPE/DATUHDATUH (1411) special_small_blocks=0
2023-04-07.14:42:51 (48ms) ioctl create
input:
type: 2
props:
aclmode: 3
atime: 0
casesensitivity: 0
compression: 2
copies: 1
org.truenas:managedby: '10.0.184.111'
quota: 0
refquota: 0
refreservation: 0
reservation: 0
sync: 2
xattr: 2
special_small_blocks: 0

2023-04-07.14:42:51 zfs create -o aclmode=passthrough -o atime=off -o casesensitivity=sensitive -o compression=off -o copies=1 -o org.truenas:managedby=10.0.184.111 -o quota=none -o refquota=none -o refreservation=none -o reservation=none-o sync=disabled -o xattr=sa -o special_small_blocks0 STRIPE/D

What command is this ..? There's an "i" for fdisk, but I couldn't find an "L"
(upper or lowercase, though I know you typed lowercase)
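(Side note: TrueNAS CORE sits on FreeBSD, whose fdisk has no -l flag; that is the Linux fdisk. The closest equivalents I know of are:)

Code:
gpart show          # partition layout of every disk
camcontrol devlist  # all disks the HBA presents, with bus/target info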



SAS2 controllers don't support SSDs...?
But SSD controllers always emulate being a spinning drive.


Are you using both links to the backplane?
Of course ... otherwise I couldn't connect 6 devices to it ... :)


I don't know what you mean by: usedbydataset is quite large.
 

CyklonDX

Well-Known Member
Nov 8, 2022
From the looks of it, this isn't RAIDz1, and I can't say from this what it is.

(please run 'zfs version')

In my case:
zfs-2.1.4-1
zfs-kmod-2.1.4-1

This is how a RAIDz1 should look:

1681017497124.png

This is an example of the performance (on this setup):

1681017993662.png


But SSD controllers always emulate being a spinning drive.
Sure, but they aren't, and the controller thinks they are spinning rust, resulting in less performance.