Which ESXi "All-in-One" NAS is easiest to install/learn/use?


TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
I just took a quick read through that FlexRAID article, and while I found quite a few minor points that I disagree with, I think he does come to the right conclusion at the end - physical-mode RDM should be just fine for any virtualized storage platform and gives the best combination of flexibility, performance, and features.
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
You need to make four decisions.
None of them really depends on the others.

How ESXi offers disks to VMs
- pass-through storage (like bare metal, native disk access)
- RDM (adds a disk layer from ESXi)
- virtual disks (for non-storage VMs only)

What OS
- anything is possible, from Apple OS X to Linux/Unix and Windows

What filesystem
- "old style" like ext4, HFS or NTFS
- Copy-on-Write like btrfs, ReFS or ZFS, with realtime checksums and snapshots

What kind of RAID
- realtime RAID (like RAID 1-6, RAID-Z)
- backup RAID / RAID-like backup on demand, e.g. SnapRAID (see the sketch below)
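To make the last distinction concrete, here is a rough sketch of realtime parity vs. on-demand parity (the device names and paths are only placeholders): a ZFS RAID-Z pool computes parity on every write, while SnapRAID only updates parity when you run a sync.

Code:
# Realtime parity: ZFS RAID-Z across three whole disks
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

# On-demand parity: a minimal snapraid.conf
#   parity  /mnt/parity/snapraid.parity
#   content /mnt/disk1/snapraid.content
#   data d1 /mnt/disk1/
#   data d2 /mnt/disk2/
snapraid sync    # compute/update parity now
snapraid scrub   # later, verify the data against the parity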
 

F1ydave

Member
Mar 9, 2014
137
21
18
I tried it. Kennedy's guide had an error in it, so I reverted to your readme file. That got it working under ESXi on an E3-1230 v3, but I still found it too overwhelming to use as a near-term solution. That's just me though, and not really a knock against your system. It's probably just fine for filesystem experts, which I'm not.

I'm leaning toward FlexRAID, because so far it is the only solution I've found that advocates using RDM instead of VT-d (Storage deployment on VMware ESXi: IOMMU/Vt-d vs Physical RDM vs Virtual RDM vs VMDK - FlexRAID), which may make it the only option for a C2758 Rangeley build, since Intel Atom CPUs don't support VT-d.
A friend of mine runs unRAID and loves it. Early on he had issues with WD Greens, but that was patched out over a year ago. He bought the two-license combo so I would set one up, and I got an extra server for it... just haven't gotten around to it yet. He only uses it for file storage: large amounts of photos (he's an amateur photographer) and about 35 of his Blu-rays on Plex.
 

HellDiverUK

Active Member
Jul 16, 2014
290
52
28
47
UnRAID is pretty good, I have a licence for it here, and I've used it on and off. It supports btrfs and xfs now (it only supported reiserfs before), and you can run Docker appliances on it as well.

I'm not really sure why I don't run it, I just end up back with Windows for some reason. It's no fault of UnRAID, it's just I'm more comfortable with Windows.
 

chinesestunna

Active Member
Jan 23, 2015
621
191
43
56
How's unRAID performance nowadays? Last time I checked, my buddy was running it and getting very slow performance. It's designed more for supporting mixed drive sizes and graceful fault tolerance, since the data sits directly on individual drives, so you still keep whatever is on the surviving disks even if you lose more drives than parity allows.
 

HellDiverUK

Active Member
Jul 16, 2014
290
52
28
47
No problems with it here, though it is running on a beefy machine. It goes as fast as Windows 2012 R2 on the same hardware. unRAID 6 runs a very modern Linux kernel, and they're shipping the latest version of Samba. You can also run a cache drive, which speeds things up.

Writing directly to the unRAID array, it runs about 60MB/s to Toshiba MC04 drives, one data and one parity. I use a 320GB WD Black 2.5" for cache, so writes run at full gigabit speed.
 

chinesestunna

Active Member
Jan 23, 2015
621
191
43
56
Cool, thanks for the info. My buddy was only getting 25MB/s, enough for a couple of media streams. When you say it's as fast as Win2012R2, are you using Storage Spaces on the same hardware?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
gea said:
You need to make four decisions. None of them really depends on the others. [...]
Can you elaborate on the pros and cons for a NAS (or rather any ESXi datastore) between the options you presented:
- Pass-through (pros/cons)
- RDM (pros/cons)
- Virtual disks (pros/cons -- are they only for running the VMs themselves?)

What about pros/cons of filesystems for a NAS (or rather any ESXi datastore):
- EXT4, HFS, NTFS
- ZFS
- ReFS
- BTRFS

I'm in the same situation and will be setting up my ESXi host with a NAS VM this coming week. I have planned:
- 5x 5TB WD Red, RAID6 (hardware w/LSI) -- NAS duty, iSCSI/shares, streaming/Plex, etc.
- 4x 200GB S3700, RAID10 (hardware w/LSI) -- for the VMs to run on.

MAYBE:

Potentially another 8x 300GB 15K RPM SCSI drives for NAS shares/iSCSI in RAID10, just to have another fast array for databases, etc. I have a 16-bay Rackable JBOD I may fill with the 15K RPM 300GB disks and use an external RAID controller, but I'd like to know if I can bring this online and offline without rebooting ESXi or the NAS VM!! I have almost 30 of these and want to put them to use :)

Potentially 2x 2TB RAID1 (hardware/LSI) - for "backup" of important stuff on the RAID6... this may go in a completely separate system that manages the tapes, simply to keep it in another machine.

(I'm not 100% sure how I plan to do the data/HDD configuration between my 3 (new) ESXi hosts and the SSDs and various HDDs; I do know that one machine will run 24/7 and the other two won't, so that's what's driving some of the decisions.)

I haven't made up my mind about what you mentioned and would like to learn more; your opinion on the matter is valued as well.

Will start my own thread for my build, but thought the NAS/filesystem question was on-topic for this.
THANKS!
 

chinesestunna

Active Member
Jan 23, 2015
621
191
43
56
I would strongly recommend you do some research into NASes before jumping in, especially virtualized ones, and get comfortable running one natively first, just IMO. To answer some of your questions:
  1. Passthrough - pro: allows direct hardware-level access to the PCI controller; the VM sees the controller and all attached drives as if it were running natively. Con: that controller and all drives connected to it are only accessible to that single VM, i.e. your plan to run both the storage array and the datastore for the VMs on it will not work unless you share a datastore back to ESXi via NFS (there's an example after this list), but then where do you store the VMDK for that NAS VM? Chicken and egg. Also make sure your motherboard BIOS + CPU support VT-d.
  2. RDM - from my limited understanding it's not recommended for ESXi and not officially supported, and most people needing direct disk access use passthrough.
  3. Virtual disks - mostly performance issues; this is basically creating a big "file" on the physical disk that's controlled by ESXi, so fragmentation, provisioning etc. become an issue, and you also lose the ability to monitor the disks via SMART etc.
If you plan to run HW RAID for your data array, I'm sure 99% of people will agree you need to pass that through to the NAS VM and let it manage/monitor the controller and disks; otherwise you're really playing with fire doing VMDKs through ESXi and will have major performance issues.
However, if you do passthrough, you will run into the issue I mentioned above: you can't run your SSD array off the same controller and use it for datastores. Another note is your expander/chassis: if you connect the expander to the RAID/HBA that's in passthrough mode, it's also passed through, so all drives on that backplane are basically only accessible to that single VM. With multipath SAS you could probably do something fancy with another controller hooked up, but I wouldn't.
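If you do go the passthrough route and share a dataset from the NAS VM back to ESXi as an NFS datastore, the loopback mount itself is simple - the IP, export path and datastore name below are placeholders, and the chicken-and-egg caveat about the NAS VM's own VMDK still applies:

Code:
# From the ESXi shell: mount an NFS export from the NAS VM as a datastore
esxcli storage nfs add -H 192.168.1.50 -s /mnt/tank/vmstore -v nas-nfs
esxcli storage nfs list    # confirm the datastore is mounted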

Again, my recommendation: if you haven't run a NAS on any of the solutions such as Napp-it, Nexenta, OVM, FreeNAS etc., then do a setup where you run it natively first and get familiar; then, if you have a compelling reason, move to ESXi. Otherwise you're overcomplicating things and risking your data for little gain.
 
  • Like
Reactions: wlee and T_Minus

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
I already have a Synology 5-bay, 4x 1GbE setup I'm wanting to replace with a VM in ESXi, because I got my VMUG Advantage subscription and now have the ability to run it on 3 hosts, and I want to sell the Synology to pay for some of the new larger disks ;) If I were to go pre-made again I'd probably go with something a little more powerful, but spending ~$1k on a NAS is silly when I can run it in ESXi just as well and save money, which is ultimately the goal. No pre-made NAS for $1-2k will have 2x E5s in it :)

So, it sounds like the best option for the RAID6 storage array is to pass it through to the NAS VM and let it manage it 100% for shares and the other software I run in that NAS VM.

What about the SSD array: if I pass that to the NAS too, won't performance degrade severely?
Maybe I should go with a 4-disk RAID6 array and a 4-disk RAID10 SSD array on the same controller, pass both through to the NAS, and then have two separate shares for storage accessible to the other VMs, and potentially the network. I could essentially use the expander too, and run the 5 disks off the expander and the 4 SSDs off the other port on the RAID card; that way, in the future, I could add additional disks to the RAID6 array for the NAS VM to manage, correct?

Is my only option to use another RAID card for the other drives/SSDs if I want to use them to store the VMDKs?

The expander thing again sounds workable; I could have the NAS VM manage that share as well, as mentioned above, I believe.

My concern now is at what IOPS the NAS VM will become limited, and just how much CPU it will need. I am looking at using 2x E5-2620 for the ESXi host, and my primary other usage will be a Blue Iris VM for security cameras, ~6 3MP cameras at 5-10fps.
(Again, I'll post this in my own build thread, but I think it's relevant to this when trying to do it all in one ESXi box that runs 24/7 at low power.)

And then misc VMs for testing/development (Apache, MySQL, various other OSes - Win7/Win8/Win2012 - to test software, etc.)
 

chinesestunna

Active Member
Jan 23, 2015
621
191
43
56
Running a Synology vs. building your own, especially in a VM environment, are two very different things; I'm a bit more conservative with production storage, so my recommendation is based on that.
An SSD array passed through to the NAS VM would have similar performance to running bare metal; that's how passthrough is designed. If your question is about share speed, it's the same as if you made a SAN/iSCSI target and shared it over whichever network, except that intra-VM transfer is very fast and will exceed that "gigabit" virtual network card you have.
Basically, most people I know don't let ESXi manage "storage" except VM datastores, and use those for VMDKs etc. VMDKs are flexible and foolproof since they're just files, and if you nuke one, oh well; you sacrifice performance for that. For production-type storage, most people will either run HW RAID passed through to a NAS VM to control, or SW RAID (ZFS, btrfs, Storage Spaces, whatever) also passed through to a NAS VM to drive.

Good luck whatever approach you take.
 
  • Like
Reactions: T_Minus

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Thanks, that's EXACTLY the information I was looking for.

I understand they're different, and that's why I'm making some archival cold copies of the important data :) The idea is to learn VMware, and also to actually use it and learn more from the experience than from just testing temporarily here and there.

Thank you for your time and thoughtful replies, it was greatly appreciated!!!
 

chinesestunna

Active Member
Jan 23, 2015
621
191
43
56
My pleasure, glad I could help out :). Good to hear you'll have backups; I've read too many tragedies about folks losing data in redundant systems.
Looking forward to your build log as I'm writing mine.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
The performance hit for using VMDK files on a VMFS filesystem is very low and not something to worry about. I'm also not sure where this belief that RDMs are not recommended or supported comes from - it seems to be a case of people repeating it until everyone believes it, while the truth is that RDMs are supported by VMware and are recommended for a variety of use cases.

To try and clear things up a bit: what is called 'passthrough' in this thread refers to passing an entire PCIe device through to the VM. It is the highest-performance way of giving hardware to a VM, and it comes with the most restrictions. Yes, the VM gets full direct access to e.g. SMART data, but the VM also needs drivers for the PCI card(s), and vMotion, HA, snapshots, fault tolerance, and probably other VMware features are not supported and will not work for a VM using PCI passthrough. For this situation, keep in mind that if you are using a hardware RAID card you must pass the entire card through to a single VM if you go this route. You can't pass a RAID-6 array through to a NAS VM and also use the same RAID card for an SSD array holding a local VMFS filesystem.
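As a rough sketch of what that looks like in practice (the PCI address and the .vmx keys below are illustrative assumptions and vary by ESXi version - passthrough is normally configured through the vSphere client):

Code:
# From the ESXi shell: list PCI devices and note the HBA/RAID card's address
esxcli hardware pci list

# After marking the device for passthrough and adding it to the VM,
# the VM's .vmx ends up with entries along these lines:
#   pciPassthru0.present = "TRUE"
#   pciPassthru0.id = "0000:02:00.0"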

RDM can also be thought of as a passthrough technology - in physical mode it is an (almost) full SCSI passthrough, with only enough virtualization to make it appear that the SCSI device is attached to the virtual LSI (or PVSCSI) controller in the VM. SMART still works with a physical-mode RDM, and if the device being RDM'd can be accessed by multiple hosts (e.g. over iSCSI, FC, etc.) then you can still use vMotion, HA, etc. - I think the only thing you can't do with a physical-mode RDM is use VMware snapshots. Virtual-mode RDM adds quite a bit more abstraction between the hardware and the VM - SMART won't work anymore, though VMware snapshots do work against virtual-mode RDMs. A physical-mode RDM in this situation would let you, for example, pass a RAID-6 array from a hardware RAID card into a NAS VM, while a second array from the same card is used for local VMFS or passed to a different VM.
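For local SATA/SAS devices, the RDM mapping file is created with vmkfstools and then attached to the VM like any other disk; the device identifier and paths below are just placeholders (real devices live under /vmfs/devices/disks/):

Code:
# Physical-compatibility RDM: near-raw SCSI access, SMART works in the guest
vmkfstools -z /vmfs/devices/disks/naa.600508b1001c0001 /vmfs/volumes/datastore1/nas/disk1-rdmp.vmdk

# Virtual-compatibility RDM: more abstraction, but VMware snapshots work
vmkfstools -r /vmfs/devices/disks/naa.600508b1001c0001 /vmfs/volumes/datastore1/nas/disk1-rdm.vmdk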

VMDK files are your last option, and are probably what about 99% of all virtual machines use for their disk space. VMDK files can live either on a VMFS-formatted disk (when the ESX host has block-level access to the disk, e.g. SATA, SAS, iSCSI, FC, etc.) or on a network share (NFS v3). VMDK files involve the most abstraction of any of the storage options, completely hiding the details of the storage from the VM (possibly bad if your VM is a storage-management platform, but perfectly fine for everything else), but in exchange they grant the most flexibility and work with every feature VMware has to offer. When you turn a server's hard drive into just a regular file, life as a sysadmin becomes so much easier - servers can grow, shrink, move around, be copied (for backup or cloning purposes), etc. as easily as you can do those things to a file on your desktop. And for most things the performance difference is barely measurable - in certain situations with fresh VMDK files there can be an initial penalty as the disk is zeroed, though in some other situations VMDK files can actually perform better than the disk they're sitting on.
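For reference, creating the VMDK itself is a one-liner from the ESXi shell (the sizes and paths below are only examples); eager-zeroing up front trades a slow create for avoiding the first-write penalty mentioned above:

Code:
# Thin-provisioned VMDK: blocks are allocated and zeroed on first write
vmkfstools -c 100G -d thin /vmfs/volumes/datastore1/nas/data.vmdk

# Eager-zeroed thick VMDK: everything is zeroed at creation time
vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/datastore1/nas/data-ezt.vmdk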
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
T_Minus said:
What about pros/cons of filesystems for a NAS (or rather any ESXi datastore):
- EXT4, HFS, NTFS
- ZFS
- ReFS
- BTRFS
Decided to respond to this part as well, but in a second post to keep the length under control...

EXT4, HFS, NTFS (and many, many others) - regular journalling filesystems, just like we've all been using on our desktops for as long as I can remember. Well... actually I have memories of the days of FAT, FAT32, ext2, etc., but whatever - modern filesystems pretty much all use journalling and have for a very long time. I won't say anything else about them, but will use them as the benchmark to compare the other filesystems to.

ZFS - probably not the very first, but at least the first popular Copy-on-Write filesystem, with all kinds of extra goodies for data protection, and with its own built-in device-management layer to replace traditional RAID. Everything is checksummed before being written to disk and verified on read - as long as there are redundant copies or parity, a bad read can be detected and repaired. The Copy-on-Write design also protects it against errors caused by a crash, power outage, or other dirty shutdown. As far as any kind of protection goes, ZFS is at the top (it still doesn't replace backups), and with the right configuration (lots of RAM, SSDs to cache, etc.) it can also perform quite well. Its downsides are its inflexibility once configured (growth can only happen in certain ways - no re-striping to add a single disk to a parity RAID, and no shrinking at all), and that its license mostly restricts it to Solaris- and BSD-based OSes - ZFS on Linux is getting better all the time, but the license issue will always mean you need to add it to a Linux build yourself; it cannot be distributed as part of the kernel.
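As a concrete sketch of the "SSDs to cache" point (device names are placeholders, and growth happens in whole-vdev increments):

Code:
# RAID-Z2 pool across six disks
zpool create tank raidz2 sdb sdc sdd sde sdf sdg

# Add an SSD as L2ARC read cache and a mirrored SSD pair as SLOG for sync writes
zpool add tank cache sdh
zpool add tank log mirror sdi sdj

# Growing means adding another whole vdev, e.g. a second raidz2 set
zpool add tank raidz2 sdk sdl sdm sdn sdo sdp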

BTRFS - a newer Copy-on-Write filesystem that is still in development (technically ext4, ZFS, NTFS, etc. are still getting new features too) and does most of what ZFS does while adding a few new things. In theory it offers the same protection as ZFS, though in reality many features are not yet stable enough and can cause data corruption - core btrfs is quite stable, btrfs mirroring is also stable and well tested, but btrfs parity RAID (e.g. RAID5/6) has only been complete for a couple of weeks, is virtually untested, and likely still has bugs. Performance tuning of btrfs has also mostly not even started, so while it is very fast at a few specific things that can exploit its Copy-on-Write nature, in most general-purpose workloads it's rather slow. In my opinion its big advantage over ZFS is the flexibility to change it while it is in use. You can grow it one drive at a time and it will re-stripe existing data across the new drives; even if every drive is a different size, it will efficiently use the capacity of every drive. You can also remove drives and shrink the filesystem while it is online. It's not quite ready for large-scale use just yet, but it's not far off anymore and will be awesome when it's ready.
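That reshaping flexibility looks roughly like this in practice (devices and the mount point are placeholders):

Code:
# Mirrored btrfs across two disks of different sizes
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool

# Grow online by one disk, then re-stripe existing data across all members
btrfs device add /dev/sdd /mnt/pool
btrfs balance start /mnt/pool

# Shrink online by evacuating and removing a disk
btrfs device delete /dev/sdb /mnt/pool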

ReFS - I've actually done only a very small amount of reading on the topic, so I don't have much to say here. But it seems to me that MS realized it needed something to compete with the level of data protection offered by ZFS/btrfs, especially as spinning disks keep getting bigger and bigger, so they came up with ReFS. I seem to remember I stopped looking into it because it was not supported and couldn't be made to work in my work environment - some restriction about not working with failover clustering, or shared FC storage, or something. It's not something I would use on a NAS for VM storage anyway, as the last time I tried MS's NFSv3 implementation it was horrible. But if you're only dealing with Windows clients and Windows servers it's probably fine.
 
  • Like
Reactions: wlee and NeverDie

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
TuxDude said:
The performance hit for using VMDK files on a VMFS filesystem is very low and not something to worry about. [...]
Great write-up!

If you were setting up a VM to have ZFS on it, is it possible to hand ZFS its "disks" in the form of separate VMDKs, rather than doing the VT-d or RDM passthrough? I am guessing the answer is no, but I'm not sure why it wouldn't work, except that ZFS would then be working with an abstraction that's in some way incongruent with the physical disk it's expecting to control.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
I want to make sure I'm understanding this properly.

I can pass the entire controller/expander through to the VM which is running the NAS__X__ software.

However, I'm not 100% clear on:
- Can I have multiple arrays on the RAID controller/expander passed through to the NAS VM? It sounds like I can with "hardware passthrough", but then there was mention of not running the SSDs on that array, and I'm pretty sure that applies to hosting the ESXi guest OSes, not to the NAS VM with passthrough of the controller & expander - but I want to be 100% sure on this.

My goal is for the NAS to manage the access/controls for the shares on the network/ESXi hosts. -- Would using VMware software be better/faster for this?

I have the VMUG Advantage license and 3 systems (each 2x CPU) for hosts. I don't want to run all 3 24/7 unless the power usage is rather low. One is an E5-2620 v1 (planned to run 24/7 if it can handle what I need), and the other two hosts are dual E5-2683 v3s, so they will hopefully idle down a good bit more than the Sandy Bridge combo (hopefully the SM motherboard allows this), but I haven't tested that yet. All 3 are using desktop SeaSonic Platinum or Gold 700W+ PSUs that are "Haswell" approved. I do NOT mind running them all the time if power consumption isn't too bad, so I will check this out. I think it's time to start my own build/advice thread :) I fear I'm getting too much into "my situation" for this thread.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
NeverDie said:
Great write-up!

If you were setting up a VM to have ZFS on it, is it possible to hand ZFS its "disks" in the form of separate VMDKs, rather than doing the VT-d or RDM passthrough? I am guessing the answer is no, but I'm not sure why it wouldn't work, except that ZFS would then be working with an abstraction that's in some way incongruent with the physical disk it's expecting to control.
ZFS on top of VMDK files will work just fine, but you are possibly mixing technologies in a way that will prevent you from getting some of the benefits of ZFS. If you had 5 drives, formatted each as a VMFS datastore, created a single VMDK on each for the NAS VM, and then built a ZFS pool from those, then the only things you would be missing in your NAS VM are access to the SMART data from the drives and the ability to control their spin-down. But if those 5 drives were connected to a hardware RAID controller, turned into a single array formatted as VMFS, and a VMDK from that was handed to the NAS VM, then ZFS effectively only has a single disk and you lose its advanced data-protection features. Worst case, if you had a 2TB disk and a few 1TB disks, created as many 1TB VMDK files on them as you could (putting 2 VMDKs on the large disk), and then passed all of those into a RAID-Z config, then if that 2TB disk dies you lose 2 VMDK files (what ZFS sees as losing 2 disks) and you've lost the entire array.
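A minimal sketch of that first layout, assuming the five per-disk VMDKs show up inside the NAS VM as sdb through sdf:

Code:
# Each virtual disk is backed 1:1 by a physical drive, so a single drive
# failure still only takes out one RAID-Z member
zpool create tank raidz sdb sdc sdd sde sdf
zpool status tank   # note: ZFS still can't see SMART data or control spin-down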

Passing everything into ZFS as raw as possible (or into any other storage-management type of VM) can protect you from doing stupid things, at the cost of some flexibility. The extra layers of abstraction provided by virtualization give tons of flexibility and options, but some of those options are bad, and it's left to the user to do the right thing.