ZFS encryption at rest!


gigatexal

I'm here to learn

gea

Well-Known Member
2018 is the year of Open-ZFS on BSD, Linux, OSX, Solarish/Illumos, and probably Windows.

ZFS encryption based on the last OpenSolaris bits is nearly ready to be available on ZoL and the other Open-ZFS platforms; see 8727 Native data and metadata encryption for zfs by lundman · Pull Request #489 · openzfs/openzfs · GitHub. Encryption is a strict requirement in light of the new EU data protection rules that apply from May, where using state-of-the-art techniques to secure data is mandatory.
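
As a rough preview, usage at the dataset level should look about like this (syntax taken from the pull request and may still change before release; pool and dataset names are just examples):

Code:
# create an encrypted dataset; the passphrase is prompted for
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure
# after a reboot, load the key before mounting
zfs load-key tank/secure
zfs mount tank/secure
# verify
zfs get encryption,keystatus tank/secure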

We will probably see later this year:

- removal of vdevs
- adding disks to vdevs (e.g., 6-disk Z2 -> 7-disk Z2)
- sequential (fast) resilvering
 

gigatexal

I'm here to learn
gea said:
2018 is the year of Open-ZFS on BSD, Linux, OSX, Solarish/Illumos, and probably Windows.

ZFS encryption based on the last OpenSolaris bits is nearly ready to be available on ZoL and the other Open-ZFS platforms; see 8727 Native data and metadata encryption for zfs by lundman · Pull Request #489 · openzfs/openzfs · GitHub. Encryption is a strict requirement in light of the new EU data protection rules that apply from May, where using state-of-the-art techniques to secure data is mandatory.

We will probably see later this year:

- removal of vdevs
- adding disks to vdevs (e.g., 6-disk Z2 -> 7-disk Z2)
- sequential (fast) resilvering
Very exciting. ZFS will be my FS of choice for a long time to come, it seems. Love it.

On an off-topic note: what are your thoughts on BFS from BeOS? You can play with it in the Haiku OS alpha. It's a really cool database-like FS that makes searching for things really easy.


 

gea

Well-Known Member
ZFS development started around 16 years ago, probably furthered by an incident in which the leading German webhoster at the time lost many of Germany's websites after being offline for a week or so. The data was stored on Sun devices, the state-of-the-art storage in those days.

Windows tries to adopt ZFS features with ReFS, but with poor performance and handling. Btrfs tries the same while keeping some Linux flexibility, but it is behind ZFS in nearly all aspects.

I do not know BFS, but it is hard for a new FS to compete, and nearly impossible when it comes to production readiness.
 

Evan

Well-Known Member
I am really glad to see OpenZFS get all the new features; it was always a worry that it was being left behind.
I don't think IBM's JFS2, Microsoft's NTFS and ReFS, or Linux's XFS, Btrfs, and ext4 are going away, but ZFS generally has it over all of them for features and robustness. On the other hand, I would love to see it support virtual devices somehow better; we live in a virtualised, passthrough world these days and don't always want to dedicate controllers and disks to ZFS.
 

_alex

Active Member
What do we gain from removal of vdevs?
I think it's about shrinking, i.e. converting a 5-disk Z1 to a 4-disk Z1, which can be handy when restructuring (or after mistakes on the CLI).

Edit:
Or is it about removal of whole vdevs from a stripe, and so shrinking the stripe by one or more vdevs?
 

ttabbal

Active Member
I would like to see vdev removal to help with mistakes. People seem prone to adding a single-drive vdev when trying to expand a pool, and since you can't remove it, you're kind of stuck, particularly if you are using raidz. Mirror users can just attach a mirror drive.
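
Roughly, the trap looks like this (untested sketch; pool and device names are placeholders):

Code:
# intended: turn a single disk into a mirror by attaching a second one
zpool attach tank ada0 ada1
# the mistake: this adds ada1 as a new single-disk top-level vdev instead
zpool add tank ada1
# a dry run (-n) prints the resulting layout without touching the pool
zpool add -n tank ada1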

It's really nice to see OpenZFS getting the new features.
 

_alex

Active Member
Yes, I think a lot of irreparable mistakes happen when expanding; the CLI syntax somehow seems to promote them.

I would love to see a per-pool ARC usage limit, plus a way to sync-mirror two nodes over the network with active/passive failover, sort of what DRBD can do but for ZFS.
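
The closest thing today is asynchronous snapshot replication via send/receive, which is not a true sync mirror like DRBD; a minimal sketch (host and pool names made up):

Code:
# initial full copy to the standby node
zfs snapshot tank/data@rep1
zfs send tank/data@rep1 | ssh nodeb zfs receive -F tankb/data
# later: send only the changes since the last snapshot
zfs snapshot tank/data@rep2
zfs send -i @rep1 tank/data@rep2 | ssh nodeb zfs receive -F tankb/data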
 

gea

Well-Known Member
Vdev removal and vdev expansion (by single disks), combined, give all the flexibility needed to reorganise a pool, for example to go from a pool of two Z2 vdevs with 6 disks each to a pool with a single 10-disk Z2 vdev, without destroying the pool.
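
For mirror/stripe vdevs, the proposed removal command looks about like this (sketch only; the feature is not merged yet, and raidz vdev removal/expansion as in the example above will take longer):

Code:
# evacuate the data from mirror-1 onto the remaining vdevs, then drop it
zpool remove tank mirror-1
# watch the evacuation progress
zpool status tank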

For ZFS HA, you can either use RSF-1 from high-availability.com as a commercial product, use Pacemaker on Linux, or use ZFS itself for failover service management, as services like NFS or SMB are simple ZFS properties (at least on Solarish) that switch automatically on a pool failover. I am testing the last one as a simple HA solution for a network mirror of two iSCSI LUNs; see http://www.napp-it.org/doc/downloads/z-raid.pdf
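
The manual form of that failover is just export/import, since the shares travel with the pool (sketch; pool and dataset names are examples):

Code:
# on the active head: release the pool cleanly
zpool export zraid
# on the standby head: take the pool over (-f if the old head died)
zpool import zraid
# NFS/SMB shares are ZFS properties, so they come up with the pool
zfs get sharenfs,sharesmb zraid/data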
 

Evan

Well-Known Member
gigatexal said:
Is there, or was there, any talk of updating ZFS to take advantage of SSDs and NVMe storage?
Are you thinking of something better than just running straight pools without the log/cache (other than RAM)? Not really optimised, but also not inefficient either; certainly it's simple.
(Note to self: I have a dozen SAS SSDs sitting around yet to be used; I should do some testing to see what I can get in terms of maximum performance.)
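
Something like this, probably (device names are placeholders; ashift=12 assumes 4K-sector SSDs, and the -o ashift flag is the ZFS-on-Linux way of setting it):

Code:
# plain all-flash pool, no slog/l2arc
zpool create -o ashift=12 flash mirror da0 da1 mirror da2 da3
zfs set compression=lz4 atime=off flash
# per-vdev throughput while a benchmark runs
zpool iostat -v flash 1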
 

gigatexal

I'm here to learn
Evan said:
Are you thinking of something better than just running straight pools without the log/cache (other than RAM)? Not really optimised, but also not inefficient either; certainly it's simple.
(Note to self: I have a dozen SAS SSDs sitting around yet to be used; I should do some testing to see what I can get in terms of maximum performance.)

Oh, that's how I would run them too: just as pools of fast storage. But I was more wondering whether there was a rethink in mind to take advantage of the fundamental differences between rotating rust and SSDs.

More just curious than anything else.


 

Evan

Well-Known Member
gigatexal said:
Oh, that's how I would run them too: just as pools of fast storage. But I was more wondering whether there was a rethink in mind to take advantage of the fundamental differences between rotating rust and SSDs.

More just curious than anything else.
In general, with all-flash the approach seems to be decent processing power behind the array and lots of dedupe and compression to make the most of the space. I don't have much idea what the vendors run in the background, except NetApp, who use their ONTAP OS; it would be interesting to see.
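
On ZFS the equivalent knobs would be something like this (sketch; compression is nearly free, but dedup needs a lot of RAM for the dedup table, so test before enabling it):

Code:
zfs set compression=lz4 flash/vols
zfs set dedup=on flash/vols
# check what the space savings actually are
zfs get compressratio,used,logicalused flash/vols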
 

gigatexal

I'm here to learn
Evan said:
In general, with all-flash the approach seems to be decent processing power behind the array and lots of dedupe and compression to make the most of the space. I don't have much idea what the vendors run in the background, except NetApp, who use their ONTAP OS; it would be interesting to see.
That makes sense.

Off topic: years ago I was at a Pure Storage marketing event, talking to one of their sales engineers. Since they were using 850 Pros at the time, I asked what they were doing about failure rates and the like, and mentioned ZFS. The guy balked: "we'd never use a COW system, it's too slow!" Apparently they have custom firmware they put on the SSDs to talk to their OS and do their magic.