I turned off all LXCs and VMs and rebooted.
Did more testing, and read speeds are good now: 100 MBps for each drive in zpool iostat -v
pve# fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k --numjobs=1 --size=30G --runtime=120 --group_reporting
seqread: (g=0): rw=read, bs=(R)...
I have turned off all torrents etc., only doing reads.
When I stream a film over the LAN it still buffers now and then! Something is really off here.
iostat shows sub-1% util on all drives.
I got burned recently on the X520 NICs... :/ Bought 5 of them and they won't accept the DAC; I have a few other X520s around with no issues on the same DACs.
As you can see from the images, I'm using 8 disks in a raidz2; reads (via zpool iostat and iostat) are equally spread over the drives.
ARC/L2ARC tuning won't help here. I was mostly looking at zfetch, the readahead value, to see if that could increase the reads.
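For reference, a minimal sketch of inspecting and raising the zfetch prefetch distance on OpenZFS at runtime. The 64 MiB value is just an example for illustration, not a recommendation:

```shell
# Current max prefetch distance (bytes) per zfetch stream
cat /sys/module/zfs/parameters/zfetch_max_distance

# Example value only: raise it to 64 MiB at runtime (root required)
echo 67108864 > /sys/module/zfs/parameters/zfetch_max_distance
```

To make a change persistent across reboots it would go in /etc/modprobe.d/ as a zfs module option instead.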
Yes, I've got the small blocksize set to...
Hm, so special vdevs can be removed? (if no small files are stored on them) But I do store small files there.
well two mirrors are faster.
I got a total of 3 drives used in different namespaces; I don't have four drives available.
But I don't see how the special vdev can relate to slow reads?
Same as the ordinary syntax. Someone correct me, but it's like this:
zpool remove xpool ata-KINGSTON_SEDC500M1920G (drive to remove)
zpool attach xpool ata-KINGSTON_SEDC500M1920G (existing drive in already existing mirror) ata-NEWDRIVEHERE
use replace to replace a drive...
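Roughly, using the same placeholder names as above:

```shell
# Swap a drive in place; ZFS resilvers onto the new one
zpool replace xpool ata-KINGSTON_SEDC500M1920G ata-NEWDRIVEHERE
```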
3way mirror for special vdev.
The slog doesn't need a mirror in my use case; Optane is highly unlikely to die, and sync writes are very seldom.
cache drives for the rest of data on the drives.
All these drives are namespaces from 3 2TB U.2 drives.
Special drives can't be removed; they are vdevs, and vdevs can't be...
These ereports seem to be more related to my SSDs, i.e.:
Feb 24 2024 11:45:50.454986717 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x36c310e32b204801
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool =...
Feb 18 2024 20:04:27.144891454 ereport.fs.zfs.zpool
Feb 18 2024 20:04:36.344772850 ereport.fs.zfs.zpool
Feb 18 2024 20:04:46.620640371 ereport.fs.zfs.zpool
Feb 18 2024 20:04:56.864508308 ereport.fs.zfs.zpool
Feb 18 2024 20:05:07.108376243 ereport.fs.zfs.zpool
Feb 18 2024 20:05:16.328257381...
Yes, both special and L2ARC; they should not be that relevant. But now I'm looking to increase the per-disk sequential read speed, currently at 30 MBps, and get it closer to the per-disk write speed of 150 MBps.
The 10TB HGST SAS drives can peak at 200 MBps.
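To check a single drive's raw sequential ceiling outside of ZFS, something like this could work (the device path is a placeholder; --readonly keeps fio from writing to the disk):

```shell
# Read directly from one member disk to find its raw sequential limit
fio --name=rawread --filename=/dev/disk/by-id/scsi-EXAMPLE --rw=read \
    --direct=1 --ioengine=libaio --bs=1M --iodepth=8 --size=4G --readonly
```

If each drive hits ~200 MBps raw but only ~30 MBps through the pool, the bottleneck is in the pool layout or prefetch, not the disks.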