Sick and tired of FreeNAS, need alternatives >>>


jazon12

New Member
Aug 18, 2017
6
0
1
43
I have been running a home server for about six years now, the setup going through various iterations in both hardware and software. The goal has always been the same: to virtualize desktops to thin clients throughout my house, with centralized processing, networking, and storage happening in my stack. I took many classes at the local community college over those years and had access to free Microsoft software, which I of course hoarded like crazy, so I have access to anything Microsoft.
I started using FreeNAS over a year ago to simply serve files and run Plex, but kept running into a brick wall on my virtualization goal. I did not want to dedicate precious server rack room to a server solely for the purpose of controlling a Hyper-V cluster, and waste all that energy. When Corral came out, I took off running with it, and changed my setup somewhat to match Corral's ability to run a VM, that VM being the Hyper-V cluster master node. Then iX took Corral away, then came back with 11. I installed the VM on that, but kept running into a networking issue that did not exist with Corral. And now my VM is core dumping on start-up. "It's a bug, it's a bug" is all I keep reading. I see the writing on the wall.

My confidence in iXsystems is gone now, and I need to move on. I need it to do two simple things: serve files and host a virtual machine. That's it. Can someone please recommend a different vendor? Hardware requirements are no issue because my equipment is decent. Thanks!
 

jazon12

New Member
Aug 18, 2017
6
0
1
43
I am not familiar with KVM or Docker. Does there exist a CLI bare-metal hypervisor that is absolutely free? When it is not Microsoft, I am very much a noob...
 

jazon12

New Member
Aug 18, 2017
6
0
1
43
Oh, and I forgot to mention one thing. My boot environment is on a 16GB flash drive, and my storage is already set up for FreeNAS. There's no way I can mess with that because I have no backup of my data. The server is a Dell R510, so anything would have to go on a flash drive.
 

markarr

Active Member
Oct 31, 2013
421
122
43
KVM is an open-source hypervisor like Hyper-V. There is also OpenMediaVault, which people here have talked about liking as well.
 

vl1969

Active Member
Feb 5, 2014
634
76
28
Well, first of all, just as a cautionary statement: if you want virtualization, flash boot is a big red flag for me.
Flash is not a good medium for a virtualization environment, IMHO.
I rolled out a two-node Hyper-V cluster two years back and had to upgrade the servers to SSD (a big headache with Dell, apparently) because the SD card modules on both servers failed intermittently over a six-month period, bringing down the cluster.

Second, you do not specify what filesystem you use for your storage array, or how big your storage is now.
Anything you do will require a backup. If you start messing with your setup, a backup is a MUST, not just a nice-to-have option.

That said, if you use ZFS for your data store, as FreeNAS does most of the time:

OpenMediaVault is a good option.
It can be run on flash (use the flash memory plugin to make it safer).
It supports most mainstream filesystems natively.
Nice web UI and almost no need for the CLI, although if you do use ZFS you may need to drop to the CLI at the beginning.
It supports virtualization via the VirtualBox plugin.
This is a very good file-server-first, virtualization-second setup.
I would plan on adding at least one extra SSD to put the VM disks on.

You can expand the functionality using plugins: MergerFS to bring all your disks (if you are not using RAID of any kind) under a single volume share, SnapRAID for data protection, etc.
Again, if you are using ZFS then you do not need these.


If virtualization needs to come first, then I'd say Proxmox VE is a good option.

Nice web UI.
Out-of-the-box support for ZFS, even for the boot disk.
Not sure if you can run it on a flash disk, though.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
With Proxmox you still need to create zpools from the CLI, except for the root pool that the OS is installed onto.

At first the ZFS CLI may seem daunting, but it's like 2-3 commands to create a pool and make a share. Making a VM from there is all GUI-based.
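For reference, a minimal sketch of those commands, assuming a mirrored pool named "tank" built from two disks and shared over NFS (the pool/dataset names and disk IDs are just placeholders, and the NFS export needs the nfs-kernel-server package on the Proxmox host):

Code:
# create a mirrored pool; /dev/disk/by-id paths stay stable across reboots
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
# create a dataset to hold the shared files
zfs create tank/share
# export the dataset over NFS (requires nfs-kernel-server on the host)
zfs set sharenfs=on tank/share
# confirm the pool is healthy
zpool status tank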

It's amazing that it takes literally seconds to have everything up and running. I'm still a fan of FreeNAS for some of the other integrations they've got now. For 98% of NAS users, Proxmox is fine.

I had been worried about ZFS on Linux, but it's solid. For hardware support, you can't compare a Debian-based Linux distro to a slow-moving FreeBSD one.
 

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
USB flash quality is all over the place. But if you take some precautions it works reasonably well. I wouldn't put the VM datastore on it though, put that on the ZFS array the way FreeNAS does.

I like Proxmox: it's free, has native ZFS, and has good virtualization features. For ZFS and file sharing, if you use the host for it, you will use the CLI, but the web UI handles virtualization. You could do that from the CLI too if you prefer.

Another very popular option is ESXi. I believe it's free for personal use. You would need to pass the HBA through to a VM for storage/shares, then use that from the other VMs. There's a little more overhead with network-based storage for everything, but on modern hardware it's likely not a big issue.
 
  • Like
Reactions: dswartz

jazon12

New Member
Aug 18, 2017
6
0
1
43
Thank you for the replies and ideas. I think the best one for my needs is OpenMediaVault, but I want to clear up a few questions first. First, there do not seem to be many people using it; how is the support for this distro? Second, I am running a ZFS share from FreeNAS, but luckily for me the volume is unlocked! How stable is OpenMediaVault with ZFS? Third, the VM I am trying to run lives on the datastore. I was never concerned about its speed or performance, since its only job is funneling authentication back and forth between the virtual machines on other systems, so having it on spinning disks did not matter to me. And if it came down to blowing that VM away and rebuilding it for OpenMediaVault, I won't lose sleep over that. Right now, I have FreeNAS running on mirrored flash drives. I can simply remove one and power back up, retaining FreeNAS until I am confident with OMV on the other flash drive. Obviously I do not trust running a large OS, no matter the brand, on a flash drive; its only purpose has always been the core boot environment. Let me know your thoughts please, thanks!
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
My preferred virtualisation platform is ESXi. It has the smallest footprint, the best support of all for any guest OS including storage virtualisation, and you can set it up initially, or restore it with all VMs up again, within minutes, even without a backup of your VM/ESXi server. The free version is enough; paid editions add features for availability, management of multiple servers, or support for Linux Docker containers as a VM.

So:
- ESXi 6.5u1 free as the base, with management via browser
- add a full-featured ZFS NAS/SAN appliance as a VM with pass-through of disks - this can be FreeNAS on BSD or my napp-it on Solarish. Do not add services such as Plex, or you must again take care of backup and restore; keep the storage VM as minimal as possible. See my how-to for Solarish (the setup is similar with any other storage VM): http://napp-it.org/doc/downloads/napp-in-one.pdf

This offers far better ZFS support and easier-to-manage advanced NAS/SAN features via browser than any Linux solution, whether you compare OMV or Proxmox. FreeNAS, NAS4Free, napp-it or NexentaStor are more than just ZFS, just like Synology is much more than just Linux + ext4.

- add VMs for your services, using the best OS for each of them, whether that means BSD, OSX, Linux, Solarish or Windows. Store them on your storage appliance with snapshot support.

No dependencies between them; everything is fully independent. Even after a complete system crash you are back online within minutes, without thinking too much, as this is only an ESXi install and a VM import from ZFS. This is my basic rule: keep the essential base, the VM platform and storage, as small as possible, and you do not have to worry about disaster recovery. Never add complex services to either of them. VM clone, copy, backup, and versioning can be done via Windows/SMB and Previous Versions.
 
Last edited:

sth

Active Member
Oct 29, 2015
379
91
28
I'd second Gea. I lost confidence in iX a while back and moved to napp-it on OmniOS, and found that performance was better, but also that things that had previously been a PITA to configure, like CIFS sharing (where I was never quite sure it was right), are now pretty much child's play. I'd also call out that Gea has been massively helpful in providing support on the few occasions I've needed it as I got my head around some OmniOS things.
 
  • Like
Reactions: Patrick

jazon12

New Member
Aug 18, 2017
6
0
1
43
I don't worry about storing other VMs. I have an EMC SAN on a 4Gb fiber fabric with enough room for the VMs I need. Actually, a portion of my storage on FreeNAS right now would be dedicated to backing up the SAN. But back to the ESXi thing... are you suggesting that I install the hypervisor on the R510 first, then install the storage handler and the Hyper-V node as virtual machines? That would mean I would have to mess around with the ZFS storage I have and potentially lose files. I don't have a backup here, so I've got to keep hands off as much as possible. Plus, I've always been of the opinion that an important storage server should NEVER be virtualized, no matter how dependable the hypervisor. Same as virtualizing your perimeter firewall: keep things like that simple. Besides, all I have available to me are 16GB flash drives to load a core boot from, and the R510 is maxed out on drives...
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
Your initial thread was about ZFS storage and virtualisation, or adding services like Plex on top of FreeNAS. Your goal was "I need it to do two simple things: serve files and host a virtual machine. That's it." and barebone FreeNAS was not the solution.

FreeNAS can serve files but is probably not the best base to host a virtual machine. It also adds complexity to your storage server, which I would always avoid as this is a critical service.

The idea behind my All-In-One concept on Solarish, which I introduced years ago and which is meanwhile also an accepted setup for BSD/FreeNAS, is that you use the ESXi type-1 hypervisor as the base for all services; see FreeNAS 9.10 on VMware ESXi 6.0 Guide | b3n.org

You said "never virtualize a storage server". This is not a problem. The rule is really "never virtualize storage". It is not a problem to virtualize a storage OS like BSD or Solaris as long as you can give it real storage access via pass-through. From the storage OS's point of view it has the same access to disk controllers and disks as on a barebone setup. If you virtualize, for example, your FreeNAS with pass-through of storage, it can use your current pool with its own disk drivers directly.

If you want to add other VMs, you use NFS as a datastore. This can be your EMC, but if you use an NFS share from the storage VM you get better performance, as all traffic is internal in software, with no external dependencies.

With your R510, you can install ESXi on USB; it may also be possible to install the storage VM on USB. All other data is on ZFS. You could also use a PCIe card with a small NVMe for ESXi and the storage VM.

What is critical: your system must support pass-through for the storage hardware.
 
Last edited:

talsit

Member
Aug 8, 2013
112
20
18
I've done a lot of different things. Currently I'm on ESXi with an Ubuntu 16.04 VM that has ZFS installed from the standard repositories. I've passed through a 9205-8e card that is attached to an external JBOD.

ESXi is installed to a 4GB SATA DOM; the datastore is on a pair of 120GB SSDs.

The server took a power hit, so I pulled the HBA and SSDs and moved them to an older server, installed ESXi to a thumb drive, and was back online in 30 minutes.

Once the first server was repaired, I pulled the HBA and SSDs and had zero issues restoring the system.

My only recommendation is to not assign drives to the array using their designation (/dev/sda, /dev/sdb, etc.); use the drive identifier (the big string of letters and numbers). I lost an array in testing because the system designation for the drives changed on reboot. There is an easy way to do this: assign them via designation, then use a command to switch over to the identifier (see the sketch below).
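Roughly like this; a minimal sketch assuming a pool named "tank" that was originally created with /dev/sdX names (the pool name is a placeholder):

Code:
# the pool was created with sdX names; export it first
zpool export tank
# re-import it, telling ZFS to scan the stable by-id device links instead
zpool import -d /dev/disk/by-id tank
# the pool members should now show up under their by-id names
zpool status tank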
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
ZFS uses a disk list to find the pool members during normal operation. If you use controller-port-based numbers for disk enumeration, like sda, and switch the disks to new ports, ZFS will not find its disks.

You must then re-import the pool; during the import all disks are read to find the pool members, and the problem is fixed.

This is why systems like Solarish with current HBAs use WWN detection of disks only. The WWN is a unique number, e.g. t50014EE204EC5FCC, assigned by the disk manufacturer, and it remains the same across controllers and servers. It has a similar function to the MAC address on NICs. It is also superior to serial numbers, as it has a defined structure; serials are sometimes not exactly the same when you use different tools to read them. On Linux you have all the choices, and the default is the trouble-making sdX.
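If you want to see those WWNs on a Linux box, a quick check is enough; just a sketch, nothing ZFS-specific:

Code:
# list the persistent device links; the wwn-* entries are the manufacturer-assigned WWNs
ls -l /dev/disk/by-id/ | grep wwn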
 
Last edited:
  • Like
Reactions: talsit

talsit

Member
Aug 8, 2013
112
20
18
Gea, thanks for that explanation.

Here is a link to what I did to set my zpool up by ID. These were the pools I talked about moving above.
 

jazon12

New Member
Aug 18, 2017
6
0
1
43
That's a great point, Gea. The ESXi idea looks like a good one, but only if I had a different setup. Those NVMe devices are a good idea, but the price on them is just too rich for my blood; ideally I would like to pull this off without having to buy anything. Is there a way your napp-it could work for my situation? Thanks!
 
Last edited:

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
The easiest part is ESXi. This is a type-1 hypervisor, not a full-featured OS like Debian; it is more like a firmware. You can boot it from a USB stick or another boot device like a SATA disk, and it runs completely from RAM.

You then want to boot VMs. This can be done from a datastore on a local disk, but the preferred way for ESXi is a SAN-like approach via NFS or iSCSI. As you want to self-provide a virtualized SAN, you must place it locally, typically on a small SATA SSD. For size, use 30GB or twice the RAM size as a minimum.

If you have no bay left for a SATA SSD, you can use a SATA DOM (e.g. Super Micro Computer, Inc. - SATA DOM Solutions) or simply place a small, cheap SATA SSD or 2.5" disk anywhere inside the server.

Put the storage VM onto this disk and boot it up. With napp-it you can download/use my ready-to-run ZFS server template. Now, when a storage OS like OmniOS or FreeNAS is running, you want to give it access to the disks. While this can be done via raw disk mapping (using single disks through the ESXi disk driver), you really want direct access to disk controllers and disks. This requires an additional RAID controller, or better an HBA, that you pass through to the storage VM.

If your disks are currently connected to an extra controller (not onboard SATA), just pass through this controller and the storage VM has full, barebone-like access to them.

Now just create an NFS share on ZFS and use it as an ESXi datastore for the other VMs. So depending on your model, you may only need an additional small SATA boot disk for the storage VM. Take care that the storage VM is the first VM to boot up, as you need it for the other VMs and for general storage (see the sketch below).
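A minimal sketch of that share step, assuming an OmniOS/Solarish storage VM and a pool called "tank" (the names are placeholders; napp-it does the same thing from its web UI):

Code:
# create a dataset to hold the ESXi VMs
zfs create tank/vmstore
# publish it over NFS via the Solarish kernel NFS server
zfs set sharenfs=on tank/vmstore
# ESXi needs root access on the share; grant it with the root= NFS option or via the napp-it UI
# then add the datastore in ESXi, pointing at the storage VM's IP and the path /tank/vmstore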

Without an extra controller you may pass through SATA, but then you require another local datastore. This can be USB, but that is not supported and a little more complicated; see ESXI / Napp-IT All In One with USB Datastore
 
Last edited: