Recent content by ekke

  1. optimize zfs for seq reads

    I turned off all LXCs and VMs, rebooted, and did more testing; read speeds are good now, 100 MB/s for each drive in zpool iostat -v.

    pve# fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k --numjobs=1 --size=30G --runtime=120 --group_reporting
    seqread: (g=0): rw=read, bs=(R)...
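    For reference, a sketch of how one might rerun this kind of test while watching per-vdev throughput at the same time; the pool name tank is an assumption, and the fio flags simply mirror the job above:

      # in one shell: per-vdev read throughput, refreshed every 5 seconds
      zpool iostat -v tank 5

      # in another shell: the sequential read job (run it from a directory on the pool)
      fio --name=seqread --rw=read --direct=1 --ioengine=libaio \
          --bs=8k --numjobs=1 --size=30G --runtime=120 --group_reporting

    A larger --bs (e.g. 1M) is also worth a try, since 8k requests tend to understate what the pool can stream.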
  2. optimize zfs for seq reads

    Gea, any feedback on this:

    pve# zpool get cachefile
    NAME   PROPERTY   VALUE  SOURCE
    rpool  cachefile  -      default
    safe   cachefile  -      default
    speed  cachefile  -      default
    tank   cachefile  none   local
    temp   cachefile  -      default

    none cachefile?! not...
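    If the intent is to put tank back onto the stock cache file so it is picked up at import time, a minimal sketch (the path is the OpenZFS default):

      # point tank back at the default cache file
      zpool set cachefile=/etc/zfs/zpool.cache tank

      # confirm the change
      zpool get cachefile tank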
  3. optimize zfs for seq reads

    pve# zpool iostat tank -vw
    tank       total_wait    disk_wait    syncq_wait   asyncq_wait
    latency    read  write   read  write  read  write  read  write  scrub  trim  rebuild
    ...
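    One note on reading this: without an interval argument the histogram is cumulative, so a sketch for taking fresh samples instead (interval and count are arbitrary):

      # per-vdev latency histograms, three samples ten seconds apart
      # (the first report is cumulative, the later ones cover just that interval)
      zpool iostat -vw tank 10 3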
  4. optimize zfs for seq reads

    I have turned off all torrents etc. that were doing reads. When I stream a film over the LAN it buffers now and then! Something is really off here; iostat shows sub-1% util on all drives.
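    Utilisation alone can hide latency spikes, so a sketch of watching per-disk read latency while the stream is playing (assumes the sysstat package is installed):

      # extended device stats every 2 seconds; the r_await column is average read latency in ms
      iostat -x 2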
  5. optimize zfs for seq reads

    These are all my zfetch values:

    pve# cat /sys/module/zfs/parameters/zfetch*
    67108864
    67108864
    2
    8
    4194304
    1
    pve#
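    The bare values are hard to map back to parameter names; printing each file name next to its contents makes the same listing self-describing:

      # prints "parameter-file:value" for every zfetch tunable
      grep . /sys/module/zfs/parameters/zfetch*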
  6. optimize zfs for seq reads

    pve# zpool get all tank
    NAME  PROPERTY  VALUE  SOURCE
    tank  size      71.9T  -
    tank  capacity  48%    -
    tank  altroot   -...
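    When only read behaviour is in question, a narrower query is easier to scan; a sketch, where tank/media is a placeholder for whichever dataset is being streamed from:

      # pool-level properties that bear on sequential reads
      zpool get ashift,fragmentation,capacity tank

      # dataset-level properties that bear on streaming reads
      zfs get recordsize,primarycache,compression tank/media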
  7. Intel X520-DA1 prices have dropped!

    eBay ....
  8. Intel X520-DA1 prices have dropped!

    I got burned recently on the X520 NICs ... :/ I bought 5 of them and they won't accept the DAC; I have a few other X520s around with no issues on the same DACs.
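    If the refusal is Intel's SFP/DAC whitelist rather than a dead card, the Linux ixgbe driver can be told to accept unlisted modules; a sketch, worth treating as a workaround rather than a fix:

      # allow non-whitelisted SFP+/DAC modules on 82599-based cards, then reload the driver
      echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe.conf
      modprobe -r ixgbe && modprobe ixgbe   # drops the links briefly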
  9. optimize zfs for seq reads

    You got more performance because of the additional vdevs.
  10. optimize zfs for seq reads

    As you can see from the images, I'm using 8 disks in a raidz2; the read load shown by zpool iostat and iostat is equally spread over the drives. ARC/L2ARC tuning won't help here; I was mostly looking at zfetch, the readahead value, to see if that could increase the reads. Yes, I have small blocksize set to...
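    For the prefetcher the usual knob is zfetch_max_distance (bytes of readahead per stream); a sketch of raising it, with the value picked purely as an example and the .conf file name arbitrary:

      # bump per-stream prefetch readahead to 256 MiB at runtime (not persistent)
      echo $((256*1024*1024)) > /sys/module/zfs/parameters/zfetch_max_distance

      # make it survive a reboot via a modprobe options file
      echo "options zfs zfetch_max_distance=268435456" > /etc/modprobe.d/zfs-prefetch.conf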
  11. optimize zfs for seq reads

    Hm, so special vdevs can be removed? (if no small files are stored on them) But I do store small files there. Well, two mirrors are faster. I have a total of 3 drives used in different namespaces. I don't have four drives available, but I don't see how the special vdev can relate to slow reads?
  12. [SOLVED] zpool SPECIAL DEVICE mirror disk add/remove syntax

    It should be this one: zpool attach xpool ata-KINGSTON_SEDC500M1920G (existing drive in the already existing mirror) ata-NEWDRIVEHERE
  13. [SOLVED] zpool SPECIAL DEVICE mirror disk add/remove syntax

    Same as the ordinary syntax. Someone correct me, but it's like this:
    zpool remove xpool ata-KINGSTON_SEDC500M1920G (drive to remove)
    zpool attach xpool ata-KINGSTON_SEDC500M1920G (existing drive in the already existing mirror) ata-NEWDRIVEHERE
    Use replace to replace a drive...
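    Putting the thread together as a sketch, with placeholder disk names; note that taking a single disk out of a mirror is zpool detach rather than zpool remove:

      # grow an existing special mirror by one member
      zpool attach xpool ata-EXISTING_MIRROR_MEMBER ata-NEW_DISK

      # take a member back out of that mirror
      zpool detach xpool ata-UNWANTED_DISK

      # swap a failing member for a new one in place
      zpool replace xpool ata-FAILED_DISK ata-NEW_DISK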
  14. optimize zfs for seq reads

    3-way mirror for the special vdev. The SLOG doesn't need a mirror in my use case: the Optane is highly unlikely to die and there are very seldom sync writes. Cache drives for the rest of the data on the drives. All these drives are namespaces from 3 2TB U.2 drives. Special drives can't be removed, they are a vdev, and vdevs can't be...
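    For reference, the three vdev classes described here are all added with plain zpool add; a sketch using made-up namespace device names (nvmeXnY):

      # 3-way mirrored special vdev for metadata and small blocks
      zpool add tank special mirror /dev/nvme0n2 /dev/nvme1n2 /dev/nvme2n2

      # single, unmirrored SLOG
      zpool add tank log /dev/nvme0n3

      # L2ARC cache devices
      zpool add tank cache /dev/nvme1n3 /dev/nvme2n3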
  15. optimize zfs for seq reads

    The resilver was due to an NVMe drive failure.