I am also suffering from this console output!!
It's a new box that had r151038ay & napp-it pro 22.03 installed, and
received a zpool from an old OmniOS r151030 system.
The box is used as a file server,
after:
- idmap add winuser:administrator@XXXXXXXX.com unixuser:root
- idmap add...
I've encountered the same issue on my home NAS all-in-one box again;
upgrading to v5000 can't fix it.
There are some people discussing the same symptom on ZFS on Linux: "zpool remove on mirrored logs fails silently" (Issue #1422, zfsonlinux/zfs on GitHub): zpool remove <pool> <log device> returns...
After I upgraded the ZFS version to 5000,
added another SSD, and upgraded the SLOG to a mirror device,
the stuck symptom seems to have disappeared.
The SLOG is still removable after 1.5 TB of zfs send.
I will try to transfer more data over NFS to see if it gets stuck.
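For anyone following along, the mirror upgrade and the removal test can be sketched as below; c6t4d0 is the SLOG device from my iostat output, while c6t5d0 is only a placeholder name for the second SSD:

```shell
# Attach a second SSD to the existing log device, turning the single
# SLOG into a mirrored log vdev (c6t5d0 is a placeholder device name).
zpool attach lfpool c6t4d0 c6t5d0
zpool status lfpool        # the log should now show up as mirror-N

# Later, to verify the whole log mirror is still removable:
#   zpool remove lfpool mirror-N
```

These are administrative commands against a live pool, so treat them as a sketch rather than something to paste blindly.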
During the zfs send, I used
root@zfs01: zpool iostat -v lfpool 5 | grep c6t4d0
to monitor the iostat of the SLOG.
But there are only 8 lines showing activity among the 10476 lines of output:
c6t4d0 8K 111G 0 0 9 54
c6t4d0 8K 111G 0 0 9 54...
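To avoid scrolling through thousands of idle samples, the grep can be replaced with a small awk filter that drops all-zero lines; a sketch (device name taken from the output above, fields 4 and 5 are the read/write operation counts in `zpool iostat -v` output):

```shell
# slog_active: filter 'zpool iostat -v' output down to samples where
# the named device shows nonzero read or write operations.
slog_active() {
  awk -v dev="$1" '$1 == dev && ($4 > 0 || $5 > 0)'
}

# Usage with the pool and device from above:
#   zpool iostat -v lfpool 5 | slog_active c6t4d0
```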
The original configuration of the SLOG was a PLEXTOR 256M5-M2.2280 on an LSI 1064E;
I changed to an Intel SSD to make sure it's not a Plextor firmware problem @@ .
I will try to upgrade the pool to v5000 later.
There is no specific reason for ZFS v28,
but I thought it would be easier if we want to migrate to another ZFS platform,
like FreeBSD or ZFS on Linux (just in case)?
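For reference, once the portability concern no longer applies, the upgrade itself is a quick check-and-apply; a sketch using the pool name from my earlier posts (note that this is one-way, so older v28-only systems can no longer import the pool afterwards):

```shell
# List pools that are not running the latest on-disk version/features.
zpool upgrade

# Upgrade lfpool in place (irreversible: v28-only systems such as
# older FreeBSD/ZoL releases will no longer be able to import it).
zpool upgrade lfpool
```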
Here is the information from iDRAC:
iDRAC/Storage/Controllers/Health and Properties
Name PERC H310 Mini (Embedded)
Device Description Integrated RAID Controller 1
PCI Slot Not Applicable
Firmware Version 20.13.2-0006
Driver Version...
Again,
recreated the zpool, this time with all OmniOS defaults.
root@zfs01:/# zpool status
pool: lfpool
state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool...
I've encountered the same issue on my home media storage @@
It's an all-in-one ESXi 5.5U2 on an ASUS TS100/E7-PI4
with 2 x LSI2008 passed through to OmniOS.
The SSDs are Crucial_CT120M50 + PLEXTOR PX-128M6.
After reverting sd.conf to the OS default and rebuilding the zpool,
with zfs_nocacheflush=1 the SLOG still gets stuck after 1.8 TB of zfs send via nc + 0.7 TB of NFS writes.
I will set zfs_nocacheflush=0 (the OS default), then rebuild the zpool again.
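For reference, a sketch of how this flag can be changed on OmniOS without a rebuild; the runtime change uses mdb and the persistent setting lives in /etc/system (values shown assume the goal of restoring the default of 0):

```shell
# Runtime change (takes effect immediately, lost on reboot):
echo 'zfs_nocacheflush/W0t0' | mdb -kw

# Persistent setting: to restore the default, remove or comment out
# any line like the following in /etc/system, then reboot:
#   set zfs:zfs_nocacheflush = 1
```

This is a kernel-tuning/config fragment, so it obviously has to be run on the OmniOS box itself.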
I am using ZFS to provide NFS mounts for ESXi servers,
so I think enabling sync writes is a must @@
That's why I am struggling to find out what happened to the SSD SLOG @@
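A quick way to confirm the datasets really honour sync writes is to check the sync property; a sketch with the pool name from my earlier posts:

```shell
# 'standard' (the default) honours NFS COMMIT / O_SYNC requests and
# routes them through the SLOG; 'disabled' would mask the SLOG problem
# but risks corrupting the ESXi guests on power loss.
zfs get -r sync lfpool
```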
YES, I did modify sd.conf.
I reverted it to the OS default yesterday, ran update_drv, rebooted, then rebuilt the zpool.
Will try to...
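For reference, reverting sd.conf only takes effect once the sd driver re-reads it; a sketch of the reload step mentioned above:

```shell
# Force the sd driver to re-read /kernel/drv/sd.conf (verbose output);
# a reboot achieves the same thing.
update_drv -vf sd
```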
Yes, I did modify sd.conf.
I will delete it after destroying the zpool.
But there was no power loss at all during this time.....
Or can you give me any recommendations for an SSD?
I've tried a Plextor M5 Pro / Plextor M6 (M2.2280) / Intel 535.
All of them get stuck.