Can't remove SLOG device


gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
With ZFS, all writes go through a RAM-based write cache for best performance.
On a power loss or crash, all writes still in that cache are lost. To protect against this you can enable sync write, which adds a log of the last writes; think of it like a BBU on hardware RAID. As disks are quite slow with small random writes, performance can then fall to a fraction of the normal cached write performance.

This is where an Slog can help, provided it is optimised for small random writes. It must also offer powerloss protection, or it cannot guarantee the last writes.

One of the cheapest SSDs that is really suited for an Slog is an Intel S3700 (100 or 200 GB).
see Top Picks for napp-it and OmniOS ZIL/ SLOG Drives

By the way,
have you modified sd.conf for the SSD or not?
The behaviour that you have seen is highly atypical. Adding/removing an Slog should be trouble-free on Open-ZFS.
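
For context, the usual napp-it-style tuning is an sd-config-list override in /etc/driver/drv/sd.conf, followed by reloading the sd driver. A minimal sketch, assuming an Intel SSD behind the sd driver; the ID string and the tunables shown are placeholders, so check your own device (e.g. with iostat -En) before copying anything:

Code:
# /etc/driver/drv/sd.conf -- hypothetical override for one SSD model
# vendor ID is padded to 8 characters, product ID (max 16 chars) follows
sd-config-list =
    "ATA     INTEL SSDSC2BW12", "physical-block-size:4096, cache-nonvolatile:true";

# reload the sd driver so the change is picked up, then reboot
update_drv -vf sd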
 
  • Like
Reactions: pricklypunter

pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
I agree. If it had been something buggy with either ZFS or napp-it/OmniOS, it would be getting reported all over the place. I have found nothing after several searches of the "big library" that would suggest this is a known issue. I think it really has to be something unique to this instance/setup, or corruption of some sort, that is to blame for this behaviour, but I don't know enough about it all to be of much use here :)
 

wallenford

Member
Nov 20, 2016
33
3
8
50
gea said:
One of the cheapest SSDs that is really suited for an Slog is an Intel S3700 (100 or 200 GB).
see Top Picks for napp-it and OmniOS ZIL/ SLOG Drives

By the way, have you modified sd.conf for the SSD or not?
The behaviour that you have seen is highly atypical. Adding/removing an Slog should be trouble-free on Open-ZFS.

I am using ZFS to provide NFS mounts for ESXi servers, so I think enabling sync write is a must.
That's why I am struggling to find out what happened to the SSD Slog.

YES, I DID modify sd.conf.
I reverted it to the OS default yesterday, ran update_drv, rebooted, then rebuilt the zpool.
I will try to recover all the ZFS datasets today.
 

wallenford

Member
Nov 20, 2016
33
3
8
50
After reverting sd.conf to the OS default and rebuilding the zpool,

with zfs_nocacheflush=1 the Slog still gets stuck after a 1.8 TB zfs send via nc plus 0.7 TB of NFS writes.

I will set zfs_nocacheflush=0 (the OS default), then rebuild the zpool again.
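
For anyone following along, a minimal sketch of how this tunable is usually checked and reverted on OmniOS/illumos; the pool-independent mdb write only affects the running kernel, while /etc/system makes a value persistent:

Code:
# show the current value of zfs_nocacheflush in the running kernel
echo zfs_nocacheflush/D | mdb -k

# set it back to the default (0) on the running kernel
echo zfs_nocacheflush/W0t0 | mdb -kw

# a persistent setting lives in /etc/system, e.g.:
#   set zfs:zfs_nocacheflush = 1
# remove or comment that line and reboot to return to the default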
 

wallenford

Member
Nov 20, 2016
33
3
8
50
I've encountered the same issue on my home media storage.

It's an all-in-one ESXi 5.5U2 box on an ASUS TS100/E7-PI4,
with 2 x LSI 2008 controllers passed through to OmniOS.

The SSDs are a Crucial_CT120M50 + PLEXTOR PX-128M6.
 

wallenford

Member
Nov 20, 2016
33
3
8
50
Again:
I recreated the zpool, this time with all OmniOS defaults.

Code:
root@zfs01:/# zpool status
  pool: lfpool
 state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on software that does not support
        feature flags.
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        lfpool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d1  ONLINE       0     0     0
            c1t1d1  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c1t2d1  ONLINE       0     0     0
            c1t3d1  ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            c1t4d1  ONLINE       0     0     0
            c1t5d1  ONLINE       0     0     0
          mirror-3  ONLINE       0     0     0
            c1t6d1  ONLINE       0     0     0
            c1t7d1  ONLINE       0     0     0
        logs
          c6t4d0    ONLINE       0     0     0
        cache
          c4t62d0   ONLINE       0     0     0

errors: No known data errors

After 1 TB of data was transferred with zfs recv via nc from another system
(doing nothing else during this period, while zpool iostat showed barely any writes to the Slog during the receive),

the Slog got stuck, and zdb -c failed:

Code:
root@zfs01:/# zdb -c lfpool
assertion failed for thread 0xfffffd7fff142a40, thread-id 1: space_map_allocated(msp->ms_sm) == 0 (0x2000 == 0x0), file ../../../uts/common/fs/zfs/metaslab.c, line 1551
Abort (core dumped)

This is so weird.
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
Your problem is quite atypical.
- Can you add some info about your disk controller settings/details (firmware mode, release)?
- Any reason for the ancient ZFS v28?
 

wallenford

Member
Nov 20, 2016
33
3
8
50
gea said:
Your problem is quite atypical.
- Can you add some info about your disk controller settings/details (firmware mode, release)?

Here is the information from iDRAC
(iDRAC / Storage / Controllers / Health and Properties):

Name: PERC H310 Mini (Embedded)
Device Description: Integrated RAID Controller 1
PCI Slot: Not Applicable
Firmware Version: 20.13.2-0006
Driver Version: 6.503.00.00ILLUMOS
Cache Memory Size: 0 MB

And the Slog is attached to an onboard SATA port:
Code:
sata2/4::dsk/c6t4d0  connected  configured  ok  Mod: INTEL SSDSC2BW120H6 FRev: RG21 SN: CVTR6241008Q120AGN  disk






 

wallenford

Member
Nov 20, 2016
33
3
8
50
Here is the zpool history since I recreated the zpool:

2016-12-12.18:10:20 zpool create -o version=28 -O refreservation=512G lfpool mirror c1t0d1 c1t1d1 mirror c1t2d1 c1t3d1 mirror c1t4d1 c1t5d1 mirror c1t6d1 c1t7d1
2016-12-12.18:10:25 zpool add lfpool cache c4t62d0
2016-12-12.18:11:00 zpool add -f lfpool log c6t4d0
2016-12-12.18:11:10 zfs create -o compression=on -o sharesmb=name=vm.archive,guestok=true -o sharenfs=on -o casesensitivity=mixed lfpool/vm.archive
2016-12-12.18:11:11 zfs create -o compression=on -o sharesmb=name=vm.iso,guestok=true -o sharenfs=on -o casesensitivity=mixed lfpool/vm.iso
2016-12-12.18:11:11 zfs create -o compression=on -o sharesmb=name=vm.test,guestok=true -o sharenfs=on -o casesensitivity=mixed lfpool/vm.test
2016-12-12.18:11:11 zfs create -o compression=on -o sharesmb=name=software,guestok=true -o sharenfs=on -o casesensitivity=mixed lfpool/software
2016-12-12.18:11:11 zfs create -o compression=on -o sharesmb=name=erp.path,guestok=true -o sharenfs=on -o casesensitivity=mixed lfpool/erp.path
2016-12-12.18:11:12 zfs create -o compression=on -o sharesmb=name=stor.foreign,guestok=true -o sharenfs=on -o casesensitivity=mixed lfpool/stor.foreign
2016-12-12.18:11:12 zfs create -o compression=on -o sharenfs=on -o casesensitivity=mixed lfpool/vm.erp.ap
2016-12-12.18:11:12 zfs create -o compression=on -o sharenfs=on -o casesensitivity=mixed lfpool/vm.erp.db
2016-12-12.18:11:12 zfs create -o compression=on -o sharenfs=on -o casesensitivity=mixed lfpool/vm.infra
2016-12-12.18:11:13 zfs create -o compression=on -o sharenfs=on -o casesensitivity=mixed lfpool/vm.intron
2016-12-12.18:11:13 zfs create -o compression=on -o sharenfs=on -o casesensitivity=mixed lfpool/vm.stor.01
2016-12-12.18:11:13 zfs create -o compression=on -o sharenfs=on -o casesensitivity=mixed lfpool/vm.stor.02
2016-12-12.18:11:13 zfs create -o compression=on -o sharenfs=on -o casesensitivity=mixed lfpool/vm.web.01
2016-12-12.18:11:14 zfs create -o compression=on -o sharenfs=on -o casesensitivity=mixed lfpool/vm.ds
2016-12-12.18:11:19 zfs create -o compression=on -o sharenfs=on -o casesensitivity=mixed lfpool/vm.temp
2016-12-12.18:47:29 zfs recv -F lfpool/vm.iso
2016-12-12.19:18:21 zfs recv -F lfpool/vm.test
2016-12-12.19:22:02 zfs recv -F lfpool/vm.erp.db
2016-12-12.19:22:59 zfs recv -F lfpool/vm.web.01
2016-12-12.20:03:08 zfs recv -F lfpool/vm.temp
2016-12-12.20:46:35 zfs recv -F lfpool/vm.stor.01
2016-12-12.20:49:15 zfs recv -F lfpool/vm.infra
2016-12-12.20:59:09 zfs recv -F lfpool/vm.stor.02
2016-12-12.21:15:20 zfs recv -F lfpool/vm.intron
2016-12-12.21:33:05 zfs recv -F lfpool/vm.erp.ap
2016-12-13.00:33:23 zfs recv -F lfpool/vm.ds
2016-12-13.08:58:46 zpool offline lfpool c6t4d0
2016-12-13.08:58:52 zpool remove lfpool c6t4d0 <====== after this, zpool status still shows the slog device
2016-12-13.09:03:57 zpool online lfpool c6t4d0
2016-12-13.09:05:28 zpool offline lfpool c6t4d0
2016-12-13.09:05:34 zpool remove lfpool c6t4d0
2016-12-13.09:10:55 zpool remove lfpool c6t4d0
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
I see no particular reason in the hardware for this behaviour. I assume your SATA port is in AHCI mode; while SATA can give problems with hotplug, it should not prevent a remove.

What you can try:
- Connect the Slog to the LSI HBA.

- Update to pool v5000. v28 is only for compatibility with Oracle Solaris, as this was the last common ZFS version. Pool v5000 is compatible across Open-ZFS (BSD, Illumos, OSX, Linux) and comes with a lot of enhancements and bugfixes.
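
For reference, a minimal sketch of the upgrade step, using the pool name from this thread; note that after upgrading, the pool can no longer be imported by v28-only systems:

Code:
# show the current on-disk version and which pools can be upgraded
zpool get version lfpool
zpool upgrade

# upgrade this pool to the latest version / feature flags (v5000)
zpool upgrade lfpool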
 

wallenford

Member
Nov 20, 2016
33
3
8
50
gea said:
What you can try:
- Connect the Slog to the LSI HBA.
- Update to pool v5000. v28 is only for compatibility with Oracle Solaris, as this was the last common ZFS version. Pool v5000 is compatible across Open-ZFS (BSD, Illumos, OSX, Linux) and comes with a lot of enhancements and bugfixes.

The original configuration of the Slog was a PLEXTOR 256M5-M2.2280 on an LSI 1064E;
I changed to the Intel SSD to make sure it's not a Plextor firmware problem.

I will try to upgrade the pool to v5000 later.
 

wallenford

Member
Nov 20, 2016
33
3
8
50
During the zfs send, I used
Code:
root@zfs01:  zpool iostat -v lfpool 5 | grep c6t4d0
to monitor the iostat of the Slog.

But only 8 of the 10476 lines of output showed any activity:
Code:
  c6t4d0       8K   111G      0      0      9     54
  c6t4d0       8K   111G      0      0      9     54
  c6t4d0       8K   111G      0      0     17     71
  c6t4d0       8K   111G      0      0     18     63
  c6t4d0       8K   111G      0      0     18     71
  c6t4d0       8K   111G      0      0     18     71
  c6t4d0       8K   111G      0      0    523  2.57K
  c6t4d0       8K   111G      0      1      0  43.6K

All the other lines are:
Code:
  c6t4d0       8K   111G      0      0      0      0

What I find very interesting is that there are zero IOPS reported, yet read/write bandwidth is still shown.
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
An Slog is not a write cache but a logging device that is only used in the very specific case of transactionally safe sync writes. On a zfs receive datastream, or during normal filer use, the Slog is not involved unless you force sync writes.
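
To illustrate the point, a minimal sketch of how one could force sync writes on a dataset and watch the Slog actually being used; the dataset name is just an example taken from this thread, and the setting should be reverted afterwards:

Code:
# force every write to this dataset to be a sync write
zfs set sync=always lfpool/vm.test

# watch the log device while writing to the dataset;
# the Slog should now show steady write activity
zpool iostat -v lfpool 5

# return to the default behaviour when done
zfs set sync=standard lfpool/vm.test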
 

wallenford

Member
Nov 20, 2016
33
3
8
50
gea said:
An Slog is not a write cache but a logging device that is only used in the very specific case of transactionally safe sync writes. On a zfs receive datastream, or during normal filer use, the Slog is not involved unless you force sync writes.

What confuses me is: why are there 0 IOPS, yet bandwidth is still being reported?
Code:
c6t4d0       8K   111G      0      0     17     71
c6t4d0       8K   111G      0      0    523  2.57K
 

wallenford

Member
Nov 20, 2016
33
3
8
50
After upgrading the pool version to 5000, adding another SSD, and converting the SLOG to a mirrored device,
the stuck symptom seems to have disappeared.

The SLOG is still removable after 1.5 TB of zfs send.

I will try to transfer more data via NFS to see if it gets stuck again.
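
For anyone reproducing this, a minimal sketch of adding a mirrored Slog and later removing it; c6t4d0 is from this thread, the second device name is hypothetical, and a mirrored log is removed via the mirror vdev name shown in zpool status (mirror-4 here is only an example):

Code:
# add a mirrored log vdev (second device name is hypothetical)
zpool add lfpool log mirror c6t4d0 c6t5d0

# check which vdev name was assigned to the log mirror
zpool status lfpool

# remove the whole log mirror by its vdev name (example: mirror-4)
zpool remove lfpool mirror-4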
 

wallenford

Member
Nov 20, 2016
33
3
8
50
gea said:
Your problem is quite atypical.
- Can you add some info about your disk controller settings/details (firmware mode, release)?
- Any reason for the ancient ZFS v28?

After rebuilding the pool at v5000 and transferring 3 TB of data, the SLOG is still good.

So I guess it was a ZFS v28 issue.

Thanks GEA.
 
  • Like
Reactions: nle

wallenford

Member
Nov 20, 2016
33
3
8
50
I've encountered the same issue on my home-NAS all-in-one box again;
upgrading to v5000 didn't fix it.

There are people discussing the same symptom for ZFS on Linux: zpool remove on mirrored logs fails silently · Issue #1422 · zfsonlinux/zfs · GitHub and zpool remove <pool> <log device> returns 'pool already exists' · Issue #4270 · zfsonlinux/zfs · GitHub

So I decided to build an Ubuntu VM, modify the ZoL code as discussed in those issues, and then build ZoL.

1. After importing the zpool in the VM, I successfully offlined the Slog, then removed it.
2. Then I exported the pool, brought napp-it back up, and imported it successfully with zpool import -fm. (Remember to physically disconnect your Slog device before the import, or the import will be refused with "one or more devices is currently unavailable".)
3. The stuck Slog device is gone!


***REMEMBER IT IS VERY DANGEROUS, ALWAYS BACKUP YOUR DATA***
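
A minimal sketch of the export/import in step 2 above, using the pool name from this thread; -m allows importing a pool whose log device is missing, and -f forces the import after the pool was last used on another host:

Code:
# on the Ubuntu/ZoL VM, after the patched zpool remove succeeded
zpool export lfpool

# back on the napp-it / OmniOS box, with the stuck Slog physically disconnected
zpool import -fm lfpool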