What's the most comfortable way to unlock encrypted ZFS?

Hey guys,

I am planning to use a ZFS-based file system in the near future. At the moment I am testing TrueNAS and napp-it (recommendations for other good ZFS-based NAS OSs are warmly welcome :) ).

What I noticed is that unlocking encrypted filesystems is not very comfortable.

Is there a way to deal with encrypted storage in an easy and comfortable way? I mean, is there an API or something similar to unlock ZFS storage remotely? Something like a URL-based unlock function (open a URL with user/password/ZFS parameters), or some other kind of API to unlock the storage remotely?

It would be really nice to open my KeePass as a kind of "single sign-on" and use its entries to unlock ZFS by clicking a URL or running an attached script.
 

WANg

Well-Known Member
Okay, I want you to think about what it means to have a "comfortable" or "easy" way to unlock a ZFS volume. A ZFS volume (or pool, or whatever you want to call that bit bucket) is locked (i.e. not accessible) until you issue a mount command specifying a keyfile or a passphrase, at which point the volume is unlocked. While it's unlocked/mounted, your data is only protected by ACL rules or whatever secondary procedures (like getting a Kerberos session token from LDAP/AD to map a volume, maybe via SMB, or perhaps NFS) you might have put in place to restrict access. The important concept here is that ZFS encryption protects data-at-rest, not data-in-flight.
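(On the command line, that lock/unlock cycle looks roughly like this - the dataset name is just an example:)
Code:
# unlock: load the key (prompts for the passphrase if keyformat=passphrase), then mount
zfs load-key tank/secure
zfs mount tank/secure
# (zfs mount -l tank/secure does both steps in one go)

# and lock it back up at the end of the day
zfs unmount tank/secure
zfs unload-key tank/secure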

To use a bank analogy, the bank vault is physically secure during evening/weekend/holiday hours, but to transact business at the beginning of the workday, the bank manager and vice-manager walk up to the vault, twist the knobs, do the retina scans, key in the password, and open it up for the day. Once it is open, the contents of the vault are only secured by the armored double doors, gates, man-traps and other layered security procedures that are in place. The bank managers only unlock the vault once every workday and press a button to lock everything back up at the end of the day.

For most people on a NAS, the need to unlock the volume only arises when you need to mount it, which probably happens only after a reboot, i.e. very rarely (unless you have some scheme where you unmount/lock the volume after every I/O operation). So, for the sake of argument, let's torture the bank analogy further: the vault requires a retina scan and an 8-digit key to authenticate, and then the bank manager twists the knob to open up the vault, which contains deeds, bullion and valuables worth hundreds of millions of dollars.

Let's just say that a particular manager took a high-resolution photo of his own retina on his smartphone and then uses a hidden USB port built into the side of the vault: he grabs a USB cable from his pocket, plugs his phone in and sends the passcode sequence from the phone directly to the keypad on the vault. So instead of bending down every day and submitting to a retina scan (which takes time) and keying in the code (which only he knows), he turns the phone's brightness up, loads the retina photo, plugs the phone into the USB port, twists the knob and voila, he's in.

So the question now is... how much do you trust that bank vault, and how much can you trust the manager who came up with this scheme?
Or, switching back to the "more comfortable" scheme: how much would you trust the user/password scheme on the user interface, where it stores the keyphrase/keyfile, and how it transmits that information to whatever it is that will receive it and unlock/mount the ZFS volume for you?
 
Of course I trust the bank vault (which is the ZFS encryption system). I also trust the manager (because it's me). And I also trust the scheme (which means opening the web UI of the bank vault / NAS system and typing in the passphrase over a secured HTTPS connection from my own PC).

In your bank analogy you assume that I am a customer who must trust the bank. But your analogy is wrong on this point. In reality, I am the bank owner, in my own bank building, with a bank vault I trust and an unlock scheme that is more or less transparent.

I can see no difference between

-> opening the browser on my own PC in my own network, browsing to my napp-it web UI, and unlocking every pool one after the other by typing in (or copying) the keyphrase for every locked pool/filesystem

vs.

-> opening and unlocking my KeePass (for example) and letting it do the whole unlock process fully automated over an HTTPS-encrypted URL, a TLS-encrypted API, or even an encrypted SSH shell connection (even an unencrypted connection would be fine if it is a local connection).

This automated scheme with KeePass could just as well be some other comfortable, transparent and secure scheme. And that's my question: what's the most comfortable way, especially if you have more than one pool/filesystem? Are there any plans to optimize the unlock process, maybe by adding FIDO2 APIs?
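Something like this rough sketch is what I have in mind - a tiny helper that my KeePass could call, piping the passphrase to stdin so it never shows up on a command line (host and dataset names are made up, and the dataset would need keyformat=passphrase with keylocation=prompt):
Code:
#!/bin/sh
# unlock-nas.sh - hypothetical helper: KeePass pipes the passphrase to stdin,
# ssh forwards stdin to the NAS, and zfs load-key reads it there before mounting.
NAS_HOST="admin@nas.local"
DATASET="tank/secure"

ssh "$NAS_HOST" "zfs load-key $DATASET && zfs mount $DATASET"
KeePass (via a trigger or auto-type) or just a terminal would then pipe the entry's password into it, e.g. echo "<passphrase>" | ./unlock-nas.sh.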


I know that in general, and in corporate environments, a NAS system runs 24/7 for weeks, months or years and reboots are rare. This means that unlocking a ZFS pool/filesystem is also rare. But NAS systems in a home environment may get shut down or rebooted more often than in a corporate environment, and in that case, unlocking the pools over a web interface one after the other is not very user-friendly.

Let me explain how I have dealt with encrypted data since Win2k:

I am a veteran VeraCrypt user (I have used it since one of the initial releases of TrueCrypt). Before TrueCrypt, I used Utimaco SafeGuard Easy and other full-system encryption software (I never used BitLocker, btw ;-) ). The advantage of full-system encryption is that it comes with pre-boot authentication, which unlocks my OS drive and also all attached encrypted data drives during the boot process. Voila, this is what I call a comfortable and also secure way of using encryption. It's just like some kind of single sign-on. With full-system encryption, sharing the files over an SMB share was also not a big deal.

Of course, this method has a lot of disadvantages. For example, it's difficult to organize your data if you have a lot of TB-sized HDDs with encrypted containers on them that need to be organized and backed up. Also, the filesystems that can be used with this kind of encryption are limited to ones that do not support self-healing and all the good stuff that ZFS supports natively.

With the release of TrueNAS 12, I noticed that ZFS now has a mature implementation of native encryption. That was the point at which I started to build a new home server.

One of the things I am not very happy with is the scheme to unlock ZFS, because I have to unlock every pool one after the other.
 

WANg

Well-Known Member
Yeah, but you are also missing the most important point(s) here:

Without anything else being done, the contents of the encrypted ZFS volumes in your NAS can only be considered secure when you are not using them. From the moment you insert/unlock/mount the drive(s) until you unmount/lock/eject them, they are not secure.
See the paradox?

As you have pointed out, most people would not reboot a NAS (mine runs for months and the ones at my workplace run for years), and most admins I know do not mount/unmount volumes on their NAS regularly. Once the NAS is up, the volumes are mounted, and other means are used to (maybe) authenticate/authorize/audit and (maybe) secure the data in flight, be it SMB, NFS, AppleShare, iSCSI, whatever.

How often would you need to mount/unmount NAS volumes? Are you planning to only mount a volume on demand, copy data in/out, and then immediately unmount it? Do you do it enough times per day to warrant making it "easy and comfortable"? To use the bank analogy further, are you going to be that bank manager who opens the big bank vault door every time someone needs to go in/out and then closes it immediately? And is this constant parade of people demanding that you open the big bank vault so tedious that you want ways to make it easier on yourself?

The locking/unlocking mechanism is but a small part of a (hopefully) rigorous, multi-layer security regimen, and one that is usually considered an edge case. On an encrypted ZFS volume, the protection kicks in if a burglar breaks into your home/office, powers off the NAS and runs away with the disks. It doesn't do anything if someone is able to sniff network data in flight, or if you let anyone onto the WiFi and map whatever they desire.

As I mentioned in the bank example, at any bank you go to, just because the vault is open doesn't mean the contents aren't secure, and just because the vault is closed and locked doesn't necessarily mean it is guaranteed secure. Bankers don't place all of their trust in the vault - they also have construction reinforcements, man-traps, multiple armored gates, armed guards, cameras, sensors, audit logs and staffing/egress procedures to ensure that known risks are constantly being handled. Trust, but verify.

An encrypted volume in a NAS, once unlocked/mounted, can be set up to implicitly trust anyone connected to it with a guest password and send bits in plaintext. It can also be set up to log any and all activity, require authentication with an MFA module before allowing file service, force auto-dismount after an idle timeout is reached, use encrypted channels for data in flight, or require the auth traffic and the file-server traffic to originate/terminate on specific, separate VLANs in order to function.
In the first case, the security offered... is illusory; in the second case, the seemingly over-complicated scheme handles data security both at rest and in motion.

Unfortunately, I've seen assumptions of security fail in my career - a small business owner's son used BitLocker on his desktop to keep billables there. The drive was an SED and required a password on power-up. The desktop had a TPM module, and BitLocker auto-unlocked based on the key stored in it. He used a fingerprint scanner embedded in the keyboard and a relatively long password on his local account.

He also let his fiancée have an account on the machine (so she could log in and do some online shopping while he was dealing with customers), and the password she set was trivial. To compound the issue, both accounts had local admin privileges (who knows why he made her account that way), so the scumbag future brother-in-law was able to shoulder-surf her while the owner's son was away and, while she was indisposed, log in as her and browse the files from her account to get to the poor guy's bank info (actually, if I remember correctly, her idle timeout was set to 15 minutes while his was 3 or 5, so the password might not even have been needed).

He ended up swallowing mid-five-figure monetary losses, and certain... interests got involved with disciplinary action against the poor slob's future brother-in-law. Let's just say that he had to schedule two family gatherings that year.

Does that mean that the security scheme failed?
Not really. The question is about how the scheme failed and what we can do to strengthen the scheme.

If the machine had been stolen and someone powered it up, then unless the fingerprint matched and the login password was known, it would be nigh impossible to read the contents of the BitLocker drive. Yanking the drive for offsite analysis wouldn't help either: the SED blocks access until the power-on password is typed in, the drive wouldn't have the BitLocker unlock key that is stored in the TPM, and removing the platters for analysis wouldn't help very much either. If the machine had been set up to enforce password complexity, biometrics and a short idle timeout for every account, or if new accounts were not configured as local admins, then the fiancée's account would not have been such easy, low-hanging fruit to compromise. If the guy had been smart enough to make a VeraCrypt container on the drive for the billables/bank info, mounted only as needed (and auto-dismounted on an idle timeout), then the fact that the future missus had an easily shoulder-surfed account password and the implicit right to see someone else's user content might not have been the issue that ended up costing him a few questionable electronic bank transfers. Good defense is done in depth and with a holistic approach to risk.

I didn't even bother with volume encryption on the work NAS - the NAS sits in a locked and armored server room with 24/7 surveillance, cloud storage and egress logs, and there's no need to repeatedly mount or dismount volumes on it. To mount a given file share for the finance department, the user has to be in the AD group for finance and in a file share where he/she has privileges; the NAS uses SMBv3 (which encrypts data in flight); the client machine must be joined to AD with group policies pushed (no trivial passwords, password aging, no short password history, enforced workstation lock-on-idle after a few minutes); the machine must be compliant with Microsoft Intune (Windows 10 Pro, recently patched, antimalware reporting a good state, firewall up and running, no unauthorized software detected, TPM up, BitLocker up and at AES-256); and connectivity is over Ethernet with port-level authentication via 802.1X and the user's AD credentials, and only on specific network ports in the finance department. The auditors might suggest NAS volume encryption, but they are comfortable with the other steps taken to secure the data and signed off on the scheme.

Even at home you can put a scheme in place to protect your data, like making certain volumes available only from a specific netmask/WiFi SSID, only allowing sshfs on that volume, and triggering a server-side dismount-on-idle.
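The dismount-on-idle part, for example, could be as crude as a cron job along these lines (the dataset name is a placeholder, and the naive idle check assumes atime is enabled on the dataset):
Code:
#!/bin/sh
# idle-lock.sh - rough sketch: if nothing under the mountpoint was accessed
# within the last 30 minutes, unmount the dataset and unload its key.
DATASET="tank/secure"
MOUNTPOINT="$(zfs get -H -o value mountpoint "$DATASET")"

if [ -z "$(find "$MOUNTPOINT" -amin -30 2>/dev/null | head -n 1)" ]; then
    zfs unmount "$DATASET" && zfs unload-key "$DATASET"
fi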
 

gea

Well-Known Member
On Linux and Illumos/OmniOS, native ZFS encryption has been available since early summer last year. On FreeBSD with OpenZFS 2.0, ZFS encryption is now also available as a ZFS filesystem property. Unlike other methods that are based on encrypted disks, you can have a different key per filesystem, and you can replicate locked encrypted filesystems to a backup system where the filesystem can be opened with the original key.
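For example (pool, dataset and host names are only examples), a raw send transfers the encrypted blocks as they are, so the backup box never needs the key and the copy stays locked until the original key is loaded there:
Code:
zfs snapshot tank/data@backup1
zfs send -w tank/data@backup1 | ssh backupserver zfs receive backuppool/data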

Oracle Solaris has had ZFS encryption in native ZFS since 2010. The OpenZFS implementation is based on the same bits from OpenSolaris but lacks Solaris features such as a keyserver or unlock on bootup. Features like auto-unlock, a keyserver, split keys or lock/unlock via SMB are not part of OpenZFS and must be implemented in a management solution, as I have done in current napp-it; see http://napp-it.org/doc/downloads/zfs_encryption.pdf

The "API" for ZFS encryption is the zfs command; see the man pages:
illumos: manual page: zfs.1m (Open-ZFS) or
Synopsis - man pages section 1M: System Administration Commands (native ZFS)
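For example, to unlock everything in one go instead of filesystem by filesystem:
Code:
# load keys for all encryption roots in all imported pools
# (prompts for each passphrase, or reads the configured keyfiles)
zfs load-key -a

# mount everything whose key is now loaded
zfs mount -a

# or do both steps at once
zfs mount -l -a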
 

Tinkerer

Member
If you generate a raw key like this, dd if=/dev/urandom of=/some/location/for/dataset.key bs=1 count=32, you can then create a dataset like this:
Code:
zfs create \
    -o encryption=aes-256-gcm \
    -o keylocation=file:///some/location/for/dataset.key \
    -o keyformat=raw \
    tank/dataset
The keylocation is a property on the ZFS dataset that is configured at creation time when those parameters are included. If it is set, the zfs load-key command knows where to look for the key. The zfs mount command can also load keys when instructed to do so with -l.

The next step would be to change the way ZFS mounts its datasets at boot time. On Linux with systemd, you could for example override or edit the zfs-mount.service unit file to include the -l parameter so that keys are loaded automatically.
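A drop-in override could look roughly like this (check systemctl cat zfs-mount.service first, since the unit name and ExecStart line differ between distributions and OpenZFS versions):
Code:
mkdir -p /etc/systemd/system/zfs-mount.service.d
cat > /etc/systemd/system/zfs-mount.service.d/load-keys.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/sbin/zfs mount -l -a
EOF
systemctl daemon-reload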

Obviously, encrypting datasets like this has no protective value whatsoever if /some/location/for/dataset.key is in an unprotected location. And if the keys were stored on the same encrypted dataset they belong to, you would end up locking yourself out (chicken-and-egg story).

So you need a way to protect the keys while still providing the level of convenience you are comfortable with. This is the trade-off you need to think about.

One way could be to store all the keys on an encrypted dataset or an encrypted LVM volume that you manually unlock during boot with a passphrase; once that is open, the keys can be loaded and the encrypted datasets unlocked.

I have done something similar myself: I use a simple wrapper script to create new datasets. It automatically generates an encryption key and stores it somewhere safe. From there it gets picked up and stored off-box using another form of encryption. I made sure I have three ways to get my encryption keys back.
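Stripped down, such a wrapper could look something like this (paths and names are placeholders, not my actual script):
Code:
#!/bin/sh
# create-encrypted-dataset.sh <pool/dataset>
# Generates a raw key, locks down its permissions and creates the dataset with it.
set -eu
DATASET="$1"
KEYDIR="/some/location/for"                  # should itself be a protected location
KEYFILE="$KEYDIR/$(echo "$DATASET" | tr '/' '_').key"

dd if=/dev/urandom of="$KEYFILE" bs=1 count=32
chmod 600 "$KEYFILE"

zfs create \
    -o encryption=aes-256-gcm \
    -o keyformat=raw \
    -o keylocation="file://$KEYFILE" \
    "$DATASET"

# don't forget to copy $KEYFILE off-box as well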

Or you could store them on a USB device that you make available at boot time and remove afterwards.

Each of these options has pros and cons, and you just need to weigh them. All I will say is this: make damn sure you keep your keys or passphrases safe from prying eyes without losing access to them yourself. You wouldn't be the first to start encrypting data and lose access to it. There is no way back: once the keys are lost, your data becomes permanently inaccessible. Think about backing up your keys and where they are stored, and test whatever scheme you come up with. Make sure it works before encrypting important stuff.