Which ESXi "All-in-One" NAS is easiest to install/learn/use?

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
So far I've only just installed ESXi, so I'm at the very beginning of the learning curve. However, I'd like to have a ZFS NAS that's accessible from various VMs, and putting it all together in one box sounds great provided that it's rock solid. I'd like to run RAIDZ3 for the best odds of maintaining data integrity over time. I'd also like to put WHS 2011 in one of the VMs and have it use the ZFS NAS for file storage, mainly because Microsoft stripped WHS 2011 of the file pooling/scrubbing that the earlier release had.
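From what I've read so far, the setup I have in mind would look roughly like this (pool, dataset, and disk names are just placeholders, and the exact commands would depend on which ZFS platform I end up on):

Code:
# Hypothetical sketch: a triple-parity (RAIDZ3) pool plus a dataset for the WHS VM.
zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
zfs create tank/whs-data            # dataset the WHS 2011 VM would map as a network share
zfs set sharesmb=on tank/whs-data   # SMB sharing; exact syntax varies by platform
zpool status tank                   # confirm the raidz3 vdev and all eight disks are online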

Napp-it sounds like it may be the easiest to install (or so the servethehome article made it sound). Have any other NAS distributions had their installs similarly simplified by now? I've never heard of napp-it outside of servethehome. Is napp-it solid?

My preference would be for FreeNAS, if only because more people use it, so finding good tutorials and/or getting help might be easier. However, I've read some rumors that FreeNAS isn't rock solid when run under ESXi using VT-d pass-through of the drives. True? Maybe that's why servethehome didn't write about it.

So, that leaves Nexenta, which I've read might be solid. However, outside of servethehome, I've also never heard Nexenta mentioned, which makes me worry that it may be either an outlier or a flash in the pan.

Are there any other easy-to-install ZFS NAS options I should consider?

All that being said, I have no prior experience with ESXi, Solaris, or ZFS. I'm willing to learn something about ESXi, but I'd prefer to use it like an appliance. That is, I really don't want to learn anything about Solaris or the like if I don't have to, and I would hope to learn only the absolute minimum needed to get by on ZFS. Not that I have anything against Solaris, but I have a lot of other things I need to learn too, and I just don't want to be spread too thin by learning unnecessary things.

The platform is:
Suggestions, advice, recommendations?

I'm hoping all this will be easy to learn and have simple GUI interfaces. If it instead turns into a slow slog, then I'd probably rather learn Btrfs on openSUSE, because it's similar to ZFS, it's considered enterprise solid, and I believe it probably has a long-term future, so it wouldn't be wasted learning. At the moment, though, I'm not finding many tutorials on it, so I'm hoping one of the ZFS distros will get me up and running a lot faster and more easily.
 
Last edited:

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
I'm wondering if I can maybe avoid the thorny issues above by just installing openSUSE as the host OS, since openSUSE allows picking Btrfs as the default file system. Allegedly Btrfs confers many of the same advantages as ZFS. openSUSE also comes with VirtualBox, so I could then run WHS in a VirtualBox VM.

Is there any significant downside to this approach? It seems like it may be easier. For instance, would that approach side-step the issue of VT-d and pass-throughs?
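If it helps clarify what I mean, here's a rough sketch of that approach (VM name, paths, sizes, and the network interface are placeholders; the VM image would simply live on the host's Btrfs filesystem):

Code:
# Hypothetical sketch: WHS 2011 in a VirtualBox VM on an openSUSE host using Btrfs.
VBoxManage createvm --name WHS2011 --ostype Windows2008_64 --register
VBoxManage modifyvm WHS2011 --memory 4096 --nic1 bridged --bridgeadapter1 eth0
VBoxManage createhd --filename /data/vms/whs2011.vdi --size 102400   # ~100 GB image on the Btrfs volume
VBoxManage storagectl WHS2011 --name SATA --add sata
VBoxManage storageattach WHS2011 --storagectl SATA --port 0 --type hdd --medium /data/vms/whs2011.vdi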
 
Last edited:

MiniKnight

Well-Known Member
Mar 30, 2012
3,073
976
113
NYC
Nexenta is more of an enterprise solution so it's solid. There is also nas4free.

Btrfs depends on whether or not you are comfortable with it. It has been in "almost ready" mode for years now.
 

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
Nexenta is more of an enterprise solution so it's solid. There is also nas4free.

Btrfs depends on whether or not you are comfortable with it. It has been in "almost ready" mode for years now.
I'm more concerned about the virtual machine part breaking than I am about native Btrfs breaking. In that sense, the Btrfs approach seems like it might be safer, because it would sit outside the VM rather than inside one. When it comes to unknown unknowns, though, it's anyone's guess. I mean, I thought VMware was supposed to be enterprise grade, but when I looked at their forums, it looked decidedly not solid. I'm still not sure what to make of that, though, as I'm not in a good position to really judge it.

Is nas4free thought to be stable if run in an all-in-one, even if FreeNAS isn't? I'm not sure that FreeNAS isn't stable, by the way; I may just be repeating a rumor. Is that all it is?

For anything archival, about all I can do is ensure there exists at least one validated backup (preferably two, and preferably using a different technology) in addition to whatever I configure here before deleting the source material. For scratch material, I'm prepared to wing it.
 
Last edited:

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
Thanks. I began to look into it, but that reminded me that I'm unfamiliar with its track record. I'm not saying it isn't great; it could be perfect. It just isn't touted the way ZFS is, and ZFS seems to be the gold standard. I don't know how these things really earn their reputations. I suppose it's a combination of high visibility, extensive public scrutiny, and probably institutional recordkeeping on large statistical samples? There are a lot of solutions that look very good but don't have that extra level of validation.

So, MiniKnight was correct to bring up the long, successful track record of ZFS. It differentiates ZFS from a lot of the alternatives. So maybe the best way to leverage it isn't with a VM, which would partly invalidate that track record by altering the in-use conditions, but rather by connecting to a separate computer over a high-speed cable, i.e. using a physical, not a virtual, machine. That way the conditions better match the usage conditions that underlie ZFS's track record.

What's the best high-speed cable arrangement to use? InfiniBand? SAS? 10-gigabit Ethernet? I looked briefly into 10-gigabit Ethernet, and it would require an adapter (at around $250 each) in both computers. Would a different technology be cheaper than that? I'm guessing SAS, if only because a lot of motherboards already come with it.
 
Last edited:

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
OpenMediaVault is just a web front-end for a lot of very mature back-end technologies (mdadm, samba, etc.), so while it is a relatively new thing, it's only a management layer - your data isn't at risk due to lack of testing.

Btrfs, on the other hand, is the data storage, and is also still in heavy development. Yes, it is "supported" for enterprise use by Suse and Oracle, but that only really means that you can call them for help if you have a problem (and a support contract), not that it will run problem free. I would call btrfs stable/reliable enough for most production use now, but only so long as you don't make use of quite a few of its features. The basic filesystem is good, subvolumes are good, snapshots are good (don't use them with a 3.17 kernel though, unless you know it has been patched for an issue they had), and raid0 and raid1 multi-device support is good; with raid1 the bit-rot protection is as good as ZFS's. Don't bother with btrfs raid10 - due to the way it works, raid1 is better in pretty much every way. But the parity-raid levels are not ready yet - raid5 and raid6 will not have recovery/scrub support until 3.19, and I wouldn't consider them production-safe for at least a few more kernel releases after that. There is no support at all yet for triple parity either, though it is on the roadmap.
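If it helps, the "safe" configuration I'm describing looks roughly like this (device names and mount point are placeholders):

Code:
# Rough sketch: raid1 for both data and metadata, plus a scrub for bit-rot protection.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool
btrfs scrub start /mnt/pool     # verifies checksums; bad blocks are repaired from the good copy
btrfs scrub status /mnt/pool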
 
  • Like
Reactions: NeverDie

gea

Well-Known Member
Dec 31, 2010
3,333
1,296
113
DE
ZFS has been production ready for years and is the current "best of all" technology.
Its two competitors, btrfs and ReFS, are not yet comparable regarding features or performance.

Another aspect:
ZFS was developed on Solaris, but you can get it on BSD, Linux and OSX as well.
No other modern filesystem is as universal.

Older technologies like ext4, HFS+ or NTFS are last millennium, as they lack
Copy-on-Write (an always-consistent filesystem) with snaps, scrubbing (online repair of silent disk errors) and realtime checksums with autorepair.

But why not try one of the easy-to-use web-managed appliances yourself?
You can install FreeNAS or NexentaStor within a short time.
I offer my napp-it with web-UI on OmniOS (a free Solaris fork) as a ready-to-use VM appliance (download, add to the ESXi inventory and run - no setup required). If you update napp-it (menu About > Update) to the newest dev edition, you can even include restorable (with hot-memory state) ESXi snaps within ZFS snaps.
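As an illustration, the day-to-day ZFS commands behind those features are short (pool and dataset names below are only examples; napp-it wraps all of this in the web-UI):

Code:
# Example only - napp-it does this from the web-UI; names are placeholders.
zfs snapshot tank/data@before-update   # instant copy-on-write snapshot
zfs rollback tank/data@before-update   # revert the dataset if something goes wrong
zpool scrub tank                       # online check and repair of silent disk errors
zpool status -v tank                   # report checksum errors found and repaired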
 
Last edited:

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
OpenMediaVault is just a web front-end for a lot of very mature back-end technologies (mdadm, samba, etc.), so while it is a relatively new thing, it's only a management layer - your data isn't at risk due to lack of testing.

Btrfs, on the other hand, is the data storage, and is also still in heavy development. Yes, it is "supported" for enterprise use by Suse and Oracle, but that only really means that you can call them for help if you have a problem (and a support contract), not that it will run problem free. I would call btrfs stable/reliable enough for most production use now, but only so long as you don't make use of quite a few of its features. The basic filesystem is good, subvolumes are good, snapshots are good (don't use them with a 3.17 kernel though, unless you know it has been patched for an issue they had), and raid0 and raid1 multi-device support is good; with raid1 the bit-rot protection is as good as ZFS's. Don't bother with btrfs raid10 - due to the way it works, raid1 is better in pretty much every way. But the parity-raid levels are not ready yet - raid5 and raid6 will not have recovery/scrub support until 3.19, and I wouldn't consider them production-safe for at least a few more kernel releases after that. There is no support at all yet for triple parity either, though it is on the roadmap.
Thanks for those insights. Very timely, as I was just about to start learning Btrfs. If Btrfs can't do triple parity, or something equivalent, then for my purposes it's not ready yet. I'm removing it from my short list.

So, back to the original plan of leveraging ZFS....
 

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
ZFS has been production ready for years and is the current "best of all" technology.
Its two competitors, btrfs and ReFS, are not yet comparable regarding features or performance.

Another aspect:
ZFS was developed on Solaris, but you can get it on BSD, Linux and OSX as well.
No other filesystem is as universal.

Older technologies like ext4, HFS+ or NTFS are last millennium, as they lack
Copy-on-Write (an always-consistent filesystem) with snaps and realtime checksums with autorepair.

But why not try one of the easy-to-use web-managed appliances yourself?
You can install FreeNAS or NexentaStor within a short time.
I offer my napp-it with web-UI on OmniOS (a free Solaris fork) as a ready-to-use VM appliance (download, add to the ESXi inventory and run - no setup required). If you update napp-it (menu About > Update) to the newest dev edition, you can even include restorable (with hot-memory state) ESXi snaps within ZFS snaps.
You're right. For my next step I should kick some tires. If those two or three are the quickest to install and try, I guess I'll start with them.
 

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
Here's one of the postings that's commonly referenced as advice against running FreeNAS in a VM: Please do not run FreeNAS in production as a Virtual Machine! | FreeNAS Community

Wouldn't the same arguments apply to the others? Yet, for instance, napp-it seems quite comfortable with it, since (as shown in the servethehome article) it even comes packaged that way.

Is one less safe to virtualize than the other, or is it about the same, with one vendor simply assessing the risk differently than the other? If the problem is dumb users, I wonder why FreeNAS doesn't just package a self-installing appliance the way napp-it did?
 

PigLover

Moderator
Jan 26, 2011
3,215
1,571
113
There is a lot of FUD on the internet. Lots of 'experts' saying 'never do this' or 'it's a bad idea to do that'. Most of it is not worth the bits used to post it. Most of the time when you see 'lots of people' posting something, it's really just bandwagon posting.

Take the one you reference, for example. Here, some nuggets of truth are incorrectly expanded into an absolute. Yes - it's a bad idea to use ZFS on virtual disks presented by the underlying hypervisor. Using pseudo-pass-through like ESXi RDM is also a bit dodgy (though better).

But then he falls off his horse and extends the argument using 'facts' that are incorrect. He makes statements about using pass-through that might have been true 10 years ago but haven't been valid for a long, long time. PCI pass-through on motherboards supporting VT-d has been rock solid for years. Every version of ESXi since 4.1 has supported it brilliantly. KVM and even Xen support it without issues today. Even Hyper-V drive pass-through is solid (though frustratingly it still conceals SMART data). Once you give the VM pass-through access to the HBA, all the arguments against doing ZFS inside a VM just fall away - ZFS is again in full control of its disks and all is well.
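To make that concrete, here's roughly what it looks like once the HBA is handed to the storage VM (menu names vary by ESXi version, and the pool/device names below are placeholders):

Code:
# ESXi side: mark the HBA for passthrough (DirectPath I/O under the host's Configuration >
# Advanced Settings in the vSphere client), reboot, then add it to the storage VM as a PCI device.
# Guest side: the VM now owns the controller, so ZFS manages the raw disks as on bare metal.
zpool create tank raidz3 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
zpool status tank   # disks appear under their real device IDs; smartmontools etc. see the physical drives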
 
Last edited:
  • Like
Reactions: NeverDie

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
One other thing about Napp-It: GEA is on here, daily and active. If you have questions, you get it from the source and not 100 jack wagons shooting blanks in the dark ;)
Is GEA from Napp-It? Or maybe GEA wrote napp-it? If so, that would indeed be good fortune. Thanks for your post, as on my first read I wasn't sure what GEA meant by "my napp-it". On a forum among strangers it's sometimes hard to know what might be a language issue and what isn't. It certainly would be awesome to hear from the source regarding my immediately preceding post on this thread.
 
Last edited:

lundrog

Member
Jan 23, 2015
75
8
8
44
Minnesota
vroger.com
It's going to be hard to beat NexentaStor for a free product, as it's based on an enterprise solution. I have run Openfiler and FreeNAS for years, but they aren't nearly as polished.
 
  • Like
Reactions: NeverDie

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
ZFS has been production ready for years and is the current "best of all" technology.
Its two competitors, btrfs and ReFS, are not yet comparable regarding features or performance.

Another aspect:
ZFS was developed on Solaris, but you can get it on BSD, Linux and OSX as well.
No other modern filesystem is as universal.

Older technologies like ext4, HFS+ or NTFS are last millennium, as they lack
Copy-on-Write (an always-consistent filesystem) with snaps, scrubbing (online repair of silent disk errors) and realtime checksums with autorepair.

But why not try one of the easy-to-use web-managed appliances yourself?
You can install FreeNAS or NexentaStor within a short time.
I offer my napp-it with web-UI on OmniOS (a free Solaris fork) as a ready-to-use VM appliance (download, add to the ESXi inventory and run - no setup required). If you update napp-it (menu About > Update) to the newest dev edition, you can even include restorable (with hot-memory state) ESXi snaps within ZFS snaps.
Will the exact same directions (http://www.servethehome.com/omnios-napp-it-zfs-applianc-vmware-esxi-minutes/) work for installing the latest napp-it release (15a, circa December 2014)? I like the simplicity of Kennedy's install guide: it's clear, and it seems to outline the shortest path to getting the installation over with quickly. However, time has passed, and with new napp-it releases I don't know whether it has become obsolete.
 

gea

Well-Known Member
Dec 31, 2010
3,333
1,296
113
DE
The guide is OK.

Download the preconfigured VM and add it to the inventory
(ESXi file browser, right-click the .vmx file).
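If you prefer the command line over the file browser, registering the VM from the ESXi shell amounts to one command (the datastore path is just an example for wherever the downloaded VM folder was unpacked):

Code:
# Example only - path is a placeholder.
vim-cmd solo/registervm /vmfs/volumes/datastore1/napp-it/napp-it.vmx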
 
  • Like
Reactions: iriscloud

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
The guide is OK.

Download the preconfigured VM and add it to the inventory
(ESXi file browser, right-click the .vmx file).
I tried it. Kennedy's guide had an error in it, so I reverted to your readme file. That got it working under ESXi using an E3-1230V3, but I nonetheless found it too overwhelming for me to use as a near-term solution. That's just me though, and not really a knock against your system. Probably it's just fine for file system experts, which I'm not.

I'm trending toward FlexRAID, because so far it's the only solution I've found that advocates using RDM instead of VT-d (Storage deployment on VMware ESXi: IOMMU/Vt-d vs Physical RDM vs Virtual RDM vs VMDK - FlexRAID), which may make it the only option for a C2758 Rangeley build, since Intel Atom CPUs don't support VT-d.
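For reference, the physical-RDM approach from that FlexRAID article boils down to something like this on the ESXi shell (the disk identifier and datastore path below are placeholders):

Code:
# Hypothetical sketch of a physical-mode RDM.
ls /vmfs/devices/disks/                                   # find the target disk's naa.* identifier
vmkfstools -z /vmfs/devices/disks/naa.5000c5001234abcd \
           /vmfs/volumes/datastore1/rdm/disk1-rdm.vmdk    # create the RDM mapping file
# Then attach disk1-rdm.vmdk to the storage VM as an existing disk; use -r instead of -z
# for a virtual-mode RDM.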