Encrypted ZFS child Filesystem not mounted - napp-it

Bronko

Member
May 13, 2016
96
7
8
101
Hi,

I have created ZFS Filesystems as shown below (status after reboot):

Screenshot from 2020-03-27 17-42-12.png
It looks fine here, but both 'tank2/Data/Movies' and 'tank2/Data/Test' child datasets (encryption inherited) are NOT mounted after a reboot, while the parent dataset 'tank2/Data' is mounted, and 'tank2/Testnoenc' too.
Directly after creating 'Movies' and 'Test' (without a reboot) they were mounted and accessible, e.g. via SMB.
Manually locking and unlocking the parent dataset 'tank2/Data' doesn't change this.

'tank2/Data/Movies' was created by a replication job
'tank2/Data/Test' was created directly

Any hint for me?
 

gea

Well-Known Member
Dec 31, 2010
2,485
837
113
DE
I have not played through every option, but the behaviour with nested encrypted filesystems should be that locking/unlocking the parent also locks/unlocks all child filesystems.

I have just created an encrypted parent and two children below it with OmniOS and napp-it 20.01, and it works like this.
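For reference, a rough sketch of this expected behaviour with the plain ZFS CLI (the lock/unlock wording is napp-it's; underneath it should boil down to roughly these commands, using 'tank2/Data' as in this thread):
Code:
# create an encrypted parent; children created below it inherit its encryption root
zfs create -o encryption=on -o keyformat=passphrase tank2/Data
zfs create tank2/Data/Movies

# unlock: load the key for the encryption root (recursively), then mount
zfs load-key -r tank2/Data
zfs mount -a

# lock: unmount children and parent, then unload the key
zfs unmount tank2/Data/Movies
zfs unmount tank2/Data
zfs unload-key -r tank2/Data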
 

Bronko

Encryption works correctly in this way, but the child datasets are not mounted, while the parent dataset is mounted after unlocking (by reboot or manually).
 

gea

So the child filesystem is unlocked but not mounted automatically when you unlock the parent?

I have not seen this behaviour. When I unlock the parent, the children are unlocked and mounted. Is this related to the replicated filesystem?
 

Bronko

Yes, exactly.
To exclude a replication issue I've created a second child dataset 'tank2/Data/Test' as described above, i.e. directly; same behavior.

Here is the mount situation after every reboot or lock/unlock of the parent dataset:
Code:
# zfs get -r canmount tank2
NAME               PROPERTY  VALUE     SOURCE
tank2              canmount  on        default
tank2/Data         canmount  on        default
tank2/Data/Movies  canmount  on        default
tank2/Data/Test    canmount  on        default
tank2/Testnoenc    canmount  on        default
Code:
# zfs list
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
....
tank2                                             5.30T  5.27T   104K  /tank2
tank2/Data                                        5.30T  5.27T   264K  /tank2/Data
tank2/Data/Movies                                 5.30T  5.27T  5.30T  /tank2/Data/Movies
tank2/Data/Test                                    216K  5.27T   216K  /tank2/Data/Test
tank2/Testnoenc                                    120K  5.27T   120K  /tank2/Testnoenc
Code:
# zfs mount
...
tank2                           /tank2
tank2/Testnoenc                 /tank2/Testnoenc
tank2/Data                      /tank2/Data
Code:
# df -h
...
tank2                 10.6T   104K      5.27T     1%    /tank2
tank2/Testnoenc       10.6T   120K      5.27T     1%    /tank2/Testnoenc
 

Bronko

Trying to mount manually:
Code:
# zfs mount tank2/Data/Movies
# zfs mount tank2/Data/Test
Yes, the datasets are mounted and the data is available locally:
Code:
# zfs mount
...
tank2                           /tank2
tank2/Testnoenc                 /tank2/Testnoenc
tank2/Data                      /tank2/Data
tank2/Data/Movies               /tank2/Data/Movies
tank2/Data/Test                 /tank2/Data/Test
I'm aware of this behavior in case the mount point of a dataset isn't empty, but it is empty...

But the datasets are not available via SMB, although it is configured; check the screenshot above (update: see information below).

The next lock/unlock of the parent dataset, or a server reboot, again results in unmounted encrypted child dataset(s).
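When this happens, a quick way to tell a key problem from a pure mount problem is to compare the two properties (a diagnostic sketch, not output from this system):
Code:
# compare key status vs. mount status after unlocking the parent
zfs get -r keystatus,mounted tank2/Data

# if keystatus=available but mounted=no, the key is loaded and only
# the mount step was skipped; this should then mount the children:
zfs mount -a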
 

gea

So the problem is related to SMB only.

This is OmniOS?
Have you set traverse_mounts=true (napp-it Services > SMB > Properties)?

By default, Solaris does not allow traversing from a parent to a child filesystem via SMB, as this can cause problems due to differing ZFS properties.
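Without napp-it, the same property can presumably be inspected and set directly with illumos' sharectl (per the smb.4 man page; treat the exact invocation as an assumption to verify on your release):
Code:
# show the current SMB server properties, including traverse_mounts
sharectl get smb

# let SMB clients traverse into child-filesystem mounts
sharectl set -p traverse_mounts=true smb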
 

Bronko

Sorry, and yes, I'm on omnios-r151032-19f7bd2ae5

But that is not the cause, in my opinion:
Code:
traverse_mounts

           The traverse_mounts setting determines how the SMB server presents
           sub-mounts underneath an SMB share.  When traverse_mounts is true
           (the default), sub-mounts are presented to SMB clients like any other
           subdirectory.   When traverse_mounts is false, sub-mounts are not
           shown to SMB clients.
illumos: manual page: smb.4

Nevertheless I've tested it, and 'traverse_mounts=false' didn't change anything.

From my point of view, the primary issue is that the encrypted ZFS child datasets are not mounted at boot or on lock/unlock.

A manual 'zfs mount' works, and the SMB share is available after 'svcadm restart svc:/network/smb/server:default'.

Any further assistance would be very welcome...
 

gea

Ok, I understand the problem and can confirm a bug in the SMB handling of nested encrypted filesystems.
Please download the newest 19.12 or 20.x and retry.
 

Bronko

Done and it seems to work:
Screenshot from 2020-03-28 20-22-34.png

'tank2/Data/Test': mounted and shared via SMB as expected
'tank2/Data/Movies' (replicated dataset): still unmounted
'tank2/Data': mounted as before, but now shared under the name 'off', and the traverse_mounts sub-mounts are visible
Screenshot from 2020-03-28 20-35-45.png

Then I click on the 'tank2/Data' SMB value 'off' and can directly share it as 'Data' (default).
Screenshot from 2020-03-28 20-47-45.png
(Movies still empty, of course)
After disabling (off) the SMB value for 'tank2/Data/Movies' and re-enabling the SMB share (Movies), the dataset is mounted and available as SMB share 'movies'.
Disabling the 'tank2/Data' SMB share 'data' is possible (really 'off' now) and all SMB shares work as expected, as in the first screenshot of this post.

Problem: after a reboot it starts in the same state as described at the beginning of this post.

@gea I'm standing by for further tests...
 

gea

Seems a little timing critical (automount is not in Open-ZFS), as it worked on some folders but not on others.

Please re-download; I have added a service refresh.
It can take a few seconds after bootup until the encrypted filesystems become visible via SMB.
 

Bronko

Done, but same as before...
Could it be related to the lock/unlock option 'Remember SMB share'?
(The former state of an SMB share can be restored from a user-defined ZFS property.)
Time to delete the dataset 'tank2/Data', recreate it and start a new replication from the old storage?
The idea is to replicate the dataset from the old to the new storage, and from an unencrypted dataset to an encrypted one.
 

gea

The complicated thing is nested filesystems paired with encryption (although unlock + shares enabled after reboot works in my setup for the parent and the child filesystems). If you avoid nested filesystems, everything becomes simple.

You can encrypt during receive:
Code:
# zfs send pool/unencrypted@base | zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/passphrase pool/encrypted

btw
napp-it disables SMB shares prior to a lock and re-enables them after unlock, to avoid "hanging" SMB connections. This is where the 'remember' option is needed.
 

Bronko

It works for the directly (locally) created child dataset too; only the replicated child dataset shows the behavior above, as you know.
Since napp-it doesn't support adding an encryption option for the target dataset in a replication job, I will start the next run from the CLI as you suggest, without a child (nested) dataset on the target system.
Further development and tests on my side are warmly welcome... ;-)
 

gea

You should get to a situation where the source filesystem is encrypted, so you can use a raw send to replicate to an encrypted destination. In that case, the destination can even be in a locked state during replication.
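A sketch of such a raw replication (snapshot name, ssh target and destination pool 'tank3' are hypothetical; -w sends the raw, still-encrypted stream):
Code:
# source side: filesystem is already encrypted
zfs snapshot tank2/Data@base

# send the raw encrypted stream; the destination inherits the
# source's encryption properties and its key can stay unloaded
zfs send -w tank2/Data@base | ssh target zfs recv tank3/Data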
 

gea

The migration should happen on the source side, so you can then replicate encrypted.

btw
Have you removed the readonly ZFS attribute from the replicated filesystem? It may hinder sharing.
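A replicated target typically keeps readonly=on; checking and clearing it would look like this (dataset name taken from this thread):
Code:
# check whether the replicated filesystem is still readonly
zfs get readonly tank2/Data/Movies

# make it writable again (or use 'zfs inherit readonly' to drop
# the locally set value)
zfs set readonly=off tank2/Data/Movies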
 

Bronko

The migration should happen on the source side, so you can then replicate encrypted.
Besides sending through an ssh tunnel, there should be no difference?
Next problem: I have no space on the source or target side to do the migration locally... ;-/

btw
Have you removed the readonly ZFS attribute from the replicated filesystem? It may hinder sharing.
Yes, but I stumbled over it during the first SMB ACL and write tests after replication... took me an hour... ;-)
 

Bronko

Currently I'm playing with different send and recv options on the target side...

Ok, I understand the problem and can confirm a bug in the SMB handling of nested encrypted filesystems.
Please download the newest 19.12 or 20.x and retry.
Done, as already mentioned, but I found a bug in the display of 'ZFS Filesystems':

Screenshot from 2020-04-02 15-16-47.png
Bug: starting with the column 'SOURCE' there is an offset of -1.

...
btw
napp-it disables SMB shares prior to a lock and re-enables them after unlock, to avoid "hanging" SMB connections. This is where the 'remember' option is needed.
This behavior seems to be the reason why the parent encrypted dataset becomes SMB-enabled (while it is disabled) after any lock/unlock (manual or reboot) in the presence of an SMB-enabled child dataset. Short version: an SMB-enabled child dataset automatically enables the parent's SMB share after a lock/unlock.

Bug: in case of a manual lock/unlock, the 'Restore SMB share' drop-down menu shows 'off' or 'no'. Whatever is chosen, you get an SMB share named 'off' (mentioned above too).
 

gea

SMB share management of nested encrypted filesystems needs some work and testing prior to an update.