NAS and Storage Options


Brian McGahuey

New Member
So, greetings all! I just registered here and am looking for some thoughts from people more experienced in this than I am.

I deal a lot with RAW video footage, 3D renders, and other forms of media. Currently, the team I work with shoots primarily with a RED Scarlet (4K RAW footage), and I also shoot with a Sony FS700/Atomos Shogun combo (also 4K RAW).

Now, without going into data rates, I've determined that I need a MINIMUM of 500MB/s of bandwidth into and out of my storage system. Ultimately I'm shooting for enough storage speed to saturate a 10GbE link (about 1.25GB/s raw, roughly 1GB/s in practice), even though that link won't be available at first.

Storage will be handled by either an i7-920 with 18GB of RAM (probably running something like FreeNAS) or a dual-core Celeron system I have on an ITX board, and this will be powering a 12-bay 4U enclosure with 12x 3TB drives.

Originally, I was planning on running Xpenology on Proxmox, but I decided against it. I've also tried OpenMediaVault, and while it's good, I'm just not sure it's for me.

There are also a few apps I'd like to run for remote collaborators, and I'd like to keep these isolated from the rest of the system (which means containers of some type).

I'd prefer a storage-type OS that's geared for NAS use. I CAN dive into config files if need be, but honestly, editing a config file to add shares, then adding users, etc., becomes a hassle without some sort of simple GUI.

This brings me to my big question.

Do you see any issues running something like FreeNAS or NAS4Free as a VM under Proxmox, or is it better to run those on bare metal and keep Proxmox on a separate box?

I'd PREFER to have everything on the same box if possible; I have limited power delivery in an old house, and neither access nor funds to have an electrician put in a new line.
 

ttabbal

Active Member
PCI passthrough (VT-d) can be a bit goofy to configure in Proxmox, and that's the only way I would consider running FreeNAS in a VM. And looking at Intel ARK, I don't think your CPU supports VT-d anyway.

What about using something like Webmin to configure users/shares? You would have to manage ZFS from the command line, but that's pretty easy. You could put Webmin in a container and bind mount some ZFS filesystems into it to provide some isolation.
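
Day-to-day ZFS management really is just a handful of commands. A rough sketch (pool, dataset, and device names here are made up for illustration):

Code:
# pool of two 6-disk raidz2 vdevs from your 12 drives (example device names)
zpool create tank raidz2 sda sdb sdc sdd sde sdf raidz2 sdg sdh sdi sdj sdk sdl
# one dataset per share, with compression enabled
zfs create tank/media
zfs set compression=lz4 tank/media
# health check
zpool status tank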

If that won't work for you, look into FreeNAS plugins/jails/VMs. I know you used to be able to do some interesting things in jails, but I don't know the current state of things on FreeNAS; there is at least some VM/container tech available. This also assumes your hardware is FreeNAS compatible, which it probably is, but it's good to verify.
 

Brian McGahuey

New Member
I know I can pass drives directly into VMs (which is how I got Xpenology up and running).

I've looked in the BIOS and remember seeing an option for VT-d, so maybe it does? I'll have to double-check tonight.

As for Webmin, I could go that route, but the way it mucks up configuration files is not good if I need to get in there myself. And from what I can gather, there are other good options nowadays, including Cockpit, Ajenti, and WebYaST.

Is there really that much of an advantage to ZFS vs. Linux software RAID 6? Ultimately, I'm shooting for speed here, with redundancy just for peace of mind.
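
For reference, the two setups I'm weighing would look something like this (device names are placeholders):

Code:
# Linux software RAID 6 across 12 drives, with ext4 on top
mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[a-l]
mkfs.ext4 /dev/md0

# vs. a single 12-disk raidz2 vdev in ZFS
zpool create tank raidz2 /dev/sd[a-l]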
 

ttabbal

Active Member
In my opinion, yes. Enough that I ran Solaris back when it was the only option for ZFS. Checksums and the RAID 5 write hole were enough to convince me. Snapshots and a bunch of other features are also very nice.
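
The checksum part is the big one for me: ZFS can tell you exactly which files are corrupt, and with raidz2 it repairs them from parity during a scrub. Pool name below is just an example:

Code:
# read every block, verify checksums, repair silently corrupted data
zpool scrub tank
# report checksum errors and list any affected files
zpool status -v tank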

Passing disks in is, also in my opinion, not good enough. There are still layers in the way that can cause issues; I want the storage server in full control of the disks. I even prefer that to be on bare metal, which is one reason why I run Proxmox the way I do. It works for some people, but I have seen people complain about data loss with disk-passing setups. At the end of the day, it's your data, and only you can choose.

As for Webmin and config files, that's the case with every tool; they all do some weird things to config files. If you want them "just so", hand-edit them. If not, you just have to accept the weirdness.
 

Brian McGahuey

New Member
Fair enough. It sounds like my best option is to go FreeNAS/NAS4Free/OpenMediaVault running ZFS on bare metal. I'll just have to build a Proxmox box later down the road if I feel the need.
 

gea

Well-Known Member
For your performance needs you may want to try Oracle Solaris, the origin of ZFS, with the genuine ZFS. Solaris and its free forks offer the best ZFS integration into the OS, the best performance in all my tests, the most storage-relevant features, and a unique integration of storage services like FC/iSCSI, NFS, and SMB. These services were either invented by Sun or are included as Solaris kernel services (no 3rd-party tools needed).

The free Solaris forks like OmniOS or OpenIndiana offer the same user experience but slightly lower performance (though still about the best in the Open-ZFS area). The advantages of Oracle Solaris over its free Open-ZFS forks:

- SMB 3 vs. SMB 2.1 (only NexentaStor, a Solaris-based Open-ZFS, has v3)
- NFS 4.1 vs. NFS 4.0
- Solaris is the only OS with encryption within ZFS
- sequential resilvering (much faster)

If you compare the Open-ZFS options based on FreeBSD vs. the Solaris-based ones, the latter offer better ZFS integration. OmniOS was also the first Open-ZFS with pool-wide checkpoints and vdev removal. OmniOS is also the smallest full-featured storage OS with a stable release every 6 months intended for professional use.
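
Both features are single commands (pool and vdev names are examples):

Code:
# take a pool-wide checkpoint before a risky change; rewind at import if needed
zpool checkpoint tank
zpool export tank
zpool import --rewind-to-checkpoint tank

# remove a complete top-level vdev (data is migrated off first)
zpool remove tank mirror-1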
 

Brian McGahuey

New Member
I do appreciate the advice, but I must say I'm a tad reluctant to use Solaris. At least with FreeNAS and its derivatives, I shouldn't need to mess with the underlying OS, and since I have zero experience with BSD, that would make it manageable unless something breaks.

Solaris would be even more foreign to me.

At least with Linux, if for some reason something does break, I can reasonably expect to be able to fix it without having to reinstall everything.

I'm certainly not opposed to running it, but given my experience, I don't necessarily think it would be my first choice.

EDIT

One thing I didn't actually consider was running Proxmox and putting my file-sharing system inside a container. I'd still have ZFS if I go that route, and I'd just pass the mounted array into the Samba container...
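
From what I've read, getting the array into an LXC container on Proxmox is just a bind mount (container ID and paths here are placeholders):

Code:
# bind mount the host's ZFS dataset into container 101 at /mnt/media
pct set 101 -mp0 /tank/media,mp=/mnt/media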
 

gea

Well-Known Member
FreeBSD and Solaris are Unix derivatives that are quite similar, with a common history. For both you can have web management: for FreeBSD there is FreeNAS or NAS4Free, and for a regular Solaris or OmniOS/OpenIndiana operating system I have developed napp-it to manage them without lock-in to a specific distribution release.

For ZoL (ZFS on Linux), there is no comparable solution; the nearest may be OMV, but without the same level of ZFS integration or ease of management. Proxmox is more an alternative to ESXi or SmartOS than a serious storage appliance.

For napp-it and how to install, you can read my 1,2,3..
napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana, Solaris and Linux : Manual
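
On a fresh OmniOS or OpenIndiana, the setup itself is the online installer one-liner (run as root; see the manual above for details):

Code:
# download and run the napp-it online installer
wget -O - www.napp-it.org/nappit | perl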
 

ttabbal

Active Member
If you plan to run a stand-alone file server, napp-it is a good choice, assuming your hardware is supported in Solaris.
 

Brian McGahuey

New Member
The reason I'm considering Proxmox as the base, with a VM or container handling file-serving duties, is some of the other things I need to run. I need something in the vein of Nextcloud/FileRun for sharing files with collaborators (cloud storage is a pain, and expensive for the amount of files we're going to be moving around). I'd like PXE boot capabilities for the network (installing, diagnostics, imaging). I also have a license server that I'd like to move off bare metal (easier to maintain that way). There are many things I'm currently doing that would benefit from a hypervisor.

That's the only reason that was my initial thought.

Oh, and gea, I see that napp-it supports containers. That might actually handle the majority of what I do.

Also, I find your site a touch confusing. Is napp-it free? Would I need to buy a license? Is it a management/admin layer that lives on top of Solaris, or is it essentially its own OS built on Solaris (like FreeNAS/OMV)?
 

gea

Well-Known Member
napp-it is a web-management and usability layer on top of a regular Solaris or OmniOS/OI installation, with a lot of tools to make handling easier.

It comes in a free version that covers all essential NAS/SAN features, and a Pro version that adds support/bugfixes and some extra functionality.

Unlike FreeNAS (FreeBSD) or OMV (Linux), it is not bundled with a special OS release. You can manage the system transparently via console or napp-it, and update the OS or napp-it independently.

The LX container solution, LX branded zones, is a specialty of OmniOS (not in Solaris or OpenIndiana). For Linux workloads it can be an alternative to the ESXi approach of a storage VM plus other VMs (where any OS like BSD, Linux, OSX, Solarish, or Windows is possible).
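
Roughly, creating such a zone looks like this (zone name, kernel version, and image tarball are examples only; check the OmniOS docs for exact syntax):

Code:
# configure an LX branded zone
zonecfg -z lx1 'create -b; set brand=lx; set zonepath=/zones/lx1;
  set ip-type=exclusive;
  add attr; set name=kernel-version; set type=string; set value=4.4; end'
# install from a Linux userland tarball, then boot
zoneadm -z lx1 install -s /tank/images/ubuntu.tar.gz
zoneadm -z lx1 boot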
 

Brian McGahuey

New Member
Awesome! Sounds like it might fit the bill. I'll have to install it in a VM tonight and poke around before putting it on bare metal.
 

Brian McGahuey

New Member
So, I checked out napp-it... pretty cool little program. I also discovered it runs on Linux... and that got me thinking: could I run it on Proxmox for managing ZFS? The answer is yes, yes I can. So, I'm thinking my setup will be this:

Proxmox running as the base OS. This gives me the virtualization and container tech I'm after.
12x 3TB drives running in RAIDZ2, managed with napp-it.

Various datasets for various needs stored on the RAIDZ2 array (commands sketched below):
- 1TB for container and VM disks
- probably 10TB or so for automated backups of machines
- 19TB or so for media and project file storage
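
Carving those out should just be datasets with quotas, something like this (pool name is whatever I end up calling it):

Code:
# one dataset per purpose, capped with quotas
zfs create -o quota=1T tank/vmdisks
zfs create -o quota=10T tank/backups
zfs create -o quota=19T tank/media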

This will all be linked to the containers via NFS and SMB. For example, I'd like to run something like FileRun/Nextcloud for collaborators to upload/download media and project files; I'll share out my media and project shares via SMB and mount them into the Nextcloud or FileRun container.

Samba shares could be handled manually via the config file, or by another container handling only SMB management. I wish there were something better for this, since I remember hating the way Webmin worked in the past, but I guess hand-editing is my best option here. Not ideal, but it would work.
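
At least a share is only a short stanza in smb.conf (path and group name are just what I'd probably use):

Code:
[media]
    path = /tank/media
    valid users = @editors
    read only = no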

Backup will most likely be handled via BackupPC (which I've used in the past and loved, since it was totally automated once set up), and I will likely set that up in another container.

I'll likely set up an Ubuntu VM and dedicate it to Docker container duties. There's a ton of stuff already packaged for Docker that would save me an insane amount of time vs. setting things up manually.
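
Nextcloud, for example, looks to be a few lines of docker-compose (host port and data path are placeholders, and a real deployment would add a database container):

Code:
version: "3"
services:
  nextcloud:
    image: nextcloud                      # official Nextcloud image
    ports:
      - "8080:80"                         # web UI on host port 8080
    volumes:
      - /mnt/media:/var/www/html/data     # data on the bind-mounted share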

So, I think that's my plan. Now, I just need to get all my drives purchased, which is gonna take a while. :(