Amazon S3 compatible ZFS cloud with minIO

gea

Well-Known Member
Dec 31, 2010
2,502
842
113
DE
Update
I have fixed a bug to allow more than four instances (shared ZFS filesystems) of minIO. You can also reduce the CPU load of minIO relative to local storage services via nice. It is now also possible to store/back up access keys on a different filesystem, e.g. the one that you use for your encryption keys.
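A minimal sketch of starting minIO at lowered CPU priority via nice; the binary and data paths below are placeholders, and the command is only echoed so it can be reviewed before running:

```shell
# Sketch: start MinIO at the lowest scheduling priority so local
# storage services (SMB/NFS) keep precedence under load.
MINIO_BIN="/opt/minio/minio"   # hypothetical install path
DATA_DIR="/pool/minio"         # hypothetical shared ZFS filesystem
CMD="nice -n 19 $MINIO_BIN server $DATA_DIR"
echo "$CMD"                    # review, then run the printed command
```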

To sync and share files on minIO over the Internet you can use the included browser access or one of these tools: 5 Best Amazon S3 User interface tools | GUIs for Amazon simple storage
 

gea

minIO on OpenIndiana and Solaris 11.4

MinIO is an Amazon S3 compatible, ultra-fast cloud service, supported by napp-it 19.12/20.x as a filesystem property. It is included in the OmniOS extra repository. From first tests, the OmniOS binaries also work on OpenIndiana and Solaris: https://www.napp-it.org/doc/downloads/minio.zip

Copy the /opt/* files to /opt/ and set the binaries for minio, the minio client and rclone to executable.
More: https://forums.servethehome.com/index.php?threads/amazon-s3-compatible-zfs-cloud-with-minio.27524/
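A minimal sketch of these install steps, exercised against scratch directories; the /opt/minio layout and binary names are assumptions based on the zip above. Replace $SRC with the unpacked zip and $DEST with / to install for real:

```shell
# Sketch: copy the unpacked /opt/* files into place and mark the
# three binaries executable (layout is an assumption).
set -eu
SRC="$(mktemp -d)"   # stand-in for the unpacked minio.zip contents
DEST="$(mktemp -d)"  # stand-in for the real /
mkdir -p "$SRC/opt/minio"
: > "$SRC/opt/minio/minio"; : > "$SRC/opt/minio/minio-mc"; : > "$SRC/opt/minio/rclone"

cp -r "$SRC/opt" "$DEST/"    # copy the /opt/* files to /opt/
chmod +x "$DEST"/opt/minio/minio "$DEST"/opt/minio/minio-mc "$DEST"/opt/minio/rclone
ls -l "$DEST/opt/minio"
```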
 

sth

Active Member
Oct 29, 2015
296
44
28
I have not done such tests and I doubt they make sense. SMB is a filesharing protocol where you can work directly on a share with a full-featured filesystem, file locking, user-dependent permissions etc. S3 is object storage without such features, optimized only for availability in a cluster environment, performance, and scalability to zettabytes.

If anything, you can only compare it to other cloud services that offer simple upload/download/sync and share, e.g. Apache webserver based tools or a server like Titan that also offers web access but respects Windows AD permissions. In that comparison, minIO/S3 seems to be much faster.
I use SeafilePro, which supports both SMB and S3 backends. I see ~150 MB/s accessing MinIO-hosted local storage via WiFi, several times faster than the same files accessed on an SMB backend.
 

ARNiTECT

Member
Jan 14, 2020
58
3
8
What would the suggested setup be for backing up from a napp-it server in one location to a remote napp-it server using S3/minIO?
 

sth

Look into the minio client help, specifically the mirror command.

Code:
./minio-mc mirror
NAME:
  minio-mc mirror - synchronize object(s) to a remote site

USAGE:
  minio-mc mirror [FLAGS] SOURCE TARGET

FLAGS:
  --overwrite                        overwrite object(s) on target
  --fake                             perform a fake mirror operation
  --watch, -w                        watch and synchronize changes
  --remove                           remove extraneous object(s) on target
  --region value                     specify region when creating new bucket(s) on target (default: "us-east-1")
  --preserve, -a                     preserve file(s)/object(s) attributes and bucket policy rules on target bucket(s)
  --md5                              force all upload(s) to calculate md5sum checksum
  --multi-master value               enable multi-master multi-site setup
  --disable-multipart                disable multipart upload feature
  --exclude value                    exclude object(s) that match specified object name pattern
  --older-than value                 filter object(s) older than L days, M hours and N minutes
  --newer-than value                 filter object(s) newer than L days, M hours and N minutes
  --storage-class value, --sc value  specify storage class for new object(s) on target
  --encrypt value                    encrypt/decrypt objects (using server-side encryption with server managed keys)
  --attr value                       add custom metadata for all objects
  --encrypt-key value                encrypt/decrypt objects (using server-side encryption with customer provided keys)
  --config-dir value, -C value       path to configuration folder (default: "/root/.minio-mc")
  --quiet, -q                        disable progress bar display
  --no-color                         disable color theme
  --json                             enable JSON formatted output
  --debug                            enable debug output
  --insecure                         disable SSL certificate verification
  --help, -h                         show help

ENVIRONMENT VARIABLES:
   MC_ENCRYPT:      list of comma delimited prefixes
   MC_ENCRYPT_KEY:  list of comma delimited prefix=secret values
Depending on the amount of data change you might be able to get away with a daily or weekly sync. If you do any research please follow up here as this is something I'm looking into currently.
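A minimal sketch of how the mirror command could back up one napp-it/MinIO instance to a remote one. The alias names, endpoint, keys and bucket names are all placeholders, and the commands are echoed for review rather than executed (newer mc releases use "mc alias set" instead of "config host add"):

```shell
# Sketch: mirror a local bucket to a remote MinIO endpoint.
MC="/opt/minio/minio-mc"                      # hypothetical binary path
REMOTE_URL="https://backup.example.com:9000"  # remote MinIO endpoint (placeholder)

# one-time: register both endpoints under an alias
echo "$MC config host add local  http://127.0.0.1:9000 LOCALKEY  LOCALSECRET"
echo "$MC config host add remote $REMOTE_URL REMOTEKEY REMOTESECRET"

# the actual sync; --remove deletes extraneous objects on the target,
# drop it if the remote should keep deleted files
CMD="$MC mirror --overwrite --remove local/mybucket remote/mybucket"
echo "$CMD"   # review, then run it, e.g. daily via a napp-it "other job"
```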
 

gea

I had a Covid-related discussion about a situation where files are on a local napp-it ZFS filer. Locally, everyone works directly on the server via SMB. The local multiuser access with all the permission restrictions should be preserved. Some of the files should be accessible from home via the Internet with a two-way sync of newer files. A complete move of all files to a cloud approach (Amazon S3 or compatible, Dropbox, Gsuite, or Owncloud) is not an option, due to the above and due to data security and privacy rules.

The first idea was a VPN. This is secure and offers access to all files the same way as when working in the office/on the LAN. This idea was discarded mainly because of the limited performance and the hassle with VPN clients.

The second idea was a Titan sftp/ftps/https server that gives Internet access to selected local files on internal storage servers based on Windows AD users/groups with all local permission settings intact. This was discarded due to the price of Titan (around 2k Euro with https, 600/1200 Euro without https).

The current idea is to share one or some ZFS filesystems via minIO/S3. This gives secure https and ultra easy, ultra fast web access from the Internet, with sync options via one of the Amazon S3 client apps, for users or groups who know the name/pw of the S3 share. While it would be possible to share and access the same files via SMB and S3, this is not recommended as there is no common file locking option and you may want to allow S3 access to only some files. For a single user or a homeserver, sharing a filesystem via S3 and SMB concurrently may be the easiest option.

The missing link is syncing these S3 files with the most current files, which normally live in the regular SMB area of the NAS. This can be done by a simple two-way local rsync script that syncs the wanted files between the normal SMB storage and the S3 filesystem based on date, with ZFS snaps/Windows previous versions for undo/versioning.

You can start such a sync script via an "other job" in napp-it, e.g. once or twice a day. You should publish these sync times to avoid files being open over SMB during a sync; otherwise use a snap as the source for the rsync to S3. An rsync from the S3 area back to SMB may simply fail on an open/locked file.
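A minimal sketch of such a two-way, date-based sync. All paths are placeholders, and the rsync commands are echoed for review rather than executed; rsync's -u (update) flag implements the "newest wins" rule, and a ZFS snapshot serves as the SMB-side source to avoid open files:

```shell
# Sketch: two-way newest-wins sync between the SMB area and the S3 filesystem.
SMB_FS="/pool/smbshare/projects"    # regular SMB area (placeholder)
S3_FS="/pool/s3share/projects"      # filesystem shared via minIO/S3 (placeholder)
SNAP="$SMB_FS/.zfs/snapshot/daily"  # snap as source avoids open SMB files

# -a preserves attributes, -u skips files that are newer on the target
echo "rsync -au $SNAP/ $S3_FS/"     # SMB (snapshot) -> S3
echo "rsync -au $S3_FS/ $SMB_FS/"   # S3 -> SMB (may fail on open/locked files)
```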

With rclone it should be possible to keep a local folder in sync with a regular cloud service like Amazon S3 or Google Drive.
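As a sketch, such an rclone sync might look like the following; the remote name "gdrive" would first have to be created with "rclone config", and all paths are placeholders (the command is echoed for review):

```shell
# Sketch: push a local folder to a cloud remote configured in rclone.
RCLONE="/opt/minio/rclone"   # hypothetical binary path
CMD="$RCLONE sync /pool/smbshare/projects gdrive:projects"
echo "$CMD"                  # review, then run the printed command
```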
 

ARNiTECT

sth said:
"Look into minio help, specifically. [...] Depending on the amount of data change you might be able to get away with a daily or weekly sync. If you do any research please follow up here as this is something I'm looking into currently."
Thanks, I'll look into this and report back.
 

Bronko

Member
May 13, 2016
102
7
18
101
Ok, the first setup was successful:

Screenshot from 2020-08-27 21-10-49.png


and MinIO Browser works as expected:

Screenshot from 2020-08-27 21-16-26.png


But Duplicati fails to connect with "All access to this bucket has been disabled."

Screenshot from 2020-08-27 21-17-15.png

(same for Amazon AWS SDK)

Has anyone tested it the same way?
 

sth

You might need to set an appropriate region; I've had some services fail to connect without one.
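A sketch of pinning the region on the MinIO side; the environment variable name is an assumption (newer MinIO builds use MINIO_REGION_NAME, older ones MINIO_REGION), and the same value would then be entered as the region in the client, e.g. Duplicati:

```shell
# Assumption: MinIO reads its reported region from an environment variable.
export MINIO_REGION_NAME="us-east-1"
# then (re)start the service, e.g.:
# /opt/minio/minio server /pool/s3share   # path is a placeholder
```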
 

gregsachs

Active Member
Aug 14, 2018
310
85
28
I'm running an older version of Minio on a Pi (2017-08-05T00:00:53Z) with Duplicati.
The Duplicati settings I use:
Amazon S3 storage
custom server url
bucket name is backups, but was created by duplicati I think
bucket region is default
storage class is default
client library is amazon aws sdk.
This works for multiple machines to the same bucket, just using a different path in the bucket for each machine.
 

Bronko

Thanks, the connection test now works with these Duplicati settings:

Screenshot from 2020-08-28 09-29-44.png

(without 'minio' in the bucket name and with a region added; the folder was created by Duplicati on the storage)

Furthermore, I got errors (about encryption) and warnings (regarding user rights) when running a backup, but I think that would be off-topic for this thread...
 

nle

Member
Oct 24, 2012
200
11
18
Sweet. I missed this.

I've been using rclone (with a custom backup script) for years (in a VM with NFS access) to back up encrypted to GSuite (Google Drive). But if I can do it (easily) directly on OmniOS, that's sweet.
 

Bronko

Yes, I think @gea described and tested it above.

Duplicati was only a very first client test (from Arch Linux) against MinIO on OmniOS (napp-it managed) on my side...
 

nle

How is Duplicati working for you guys? I used it years ago, but it managed to corrupt its database (multiple times), so I ended up using a simpler solution.
 


gea

S3 sharing is not a replacement for an ftp or webserver.
The share option is there to offer anonymous but secure access to a file for up to a week.
 

Bronko

Since the unhashed 'AWS Access ID' and 'Region Value' are included in the link, the anonymity is reduced. Maybe it's more a bug than a feature, but nice to have for some rare situations.

The possibilities of MinIO are amazing, but for public sharing I will still prefer my Seafile server (on a different host).