Trying to decide: FreeNAS, Unraid, Rockstor, Proxmox, etc.

zer0sum

Well-Known Member
Mar 8, 2013
I'm changing my home lab environment around a little and trying to get a bit more flexibility in my storage and hypervisor setup, and I would love some advice regarding which NAS operating system might be best for my environment.

Apologies in advance for the long post!

TL;DR - Looking for a fast NAS operating system for storing lots of large virtual machine images.
FreeNAS is probably the answer, but I thought I'd see what people think first.

I currently have an all-in-one type of setup, with ESXi hosting everything and HBAs passed through to an XPEnology (Synology) VM that provides NAS storage and acts as a jack-of-all-trades system running lots of Docker containers: cloud sync, Plex, Hydra, SABnzbd, Sonarr, etc.
It also hosts a lot of firewall and networking VMs, as I have a big lab environment for testing.

My thoughts are to split it up and have separate storage and hypervisors, and I'm trying to work with the hardware I already have while adding relatively inexpensive parts for more speed :)

I will probably switch hypervisors every now and again, but I want the storage to be more static.

Networking is 2 x Juniper EX2300-C switches, so I have 4 x 10G SFP+ ports, but two of those are used to connect the switches, as they are at opposite ends of my house.

This is the hardware I have and my rough thoughts right now...

Storage system
The operating system is still undecided, but I would like it to be flexible and fast.
Hopefully I can get this to saturate 10G and provide VM storage to the networked hypervisors (a quick network sanity check is sketched after the parts list).
I could possibly host cloud sync, Plex, Hydra, SABnzbd, Sonarr, etc. here.
  • Silverstone CS380
  • X9SRL-F
  • E5-1660 v2 for fast single-core speed
  • 64GB RAM
  • SSD boot drives from motherboard ports
  • NVMe?
  • Mellanox CX3 connected to EX2300 at 10G
  • 1 x SAS 3008 HBA for 8 x 1TB SSDs
  • 1 x SAS 2008 HBA for random disks
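
Before blaming storage for any shortfall, it's worth confirming the raw 10G path with iperf3; a minimal sketch, where the IP and flags are placeholders rather than my actual layout:

  # on the NAS (server side)
  iperf3 -s

  # on a hypervisor (client side): 4 parallel streams for 30 seconds
  iperf3 -c 10.0.0.10 -P 4 -t 30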

Main Hypervisor
EVE-NG because it's like working in Visio :)
Hosting all sorts of network- and security-focused virtual machines, plus Windows, Linux, etc.
  • NZXT H440
  • X9SRL-F
  • E5-2695 v2 for lots of cores
  • 128GB RAM
  • SSD/NVMe local storage
  • iSCSI or NFS to the NAS (a sample NFS mount is sketched after this list)
  • Mellanox CX3 port 1 connected to EX2300 at 10G
  • Mellanox CX3 port 2 directly connected to my workstation at 40G
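
For the NFS option, something like this on the EVE-NG host should do it; the IP, export path and mount point are assumptions, and I'd test on a scratch mount point first:

  # one-off mount for testing
  mount -t nfs -o vers=3,hard 10.0.0.10:/mnt/tank/vmstore /mnt/vmstore

  # or the equivalent /etc/fstab entry
  10.0.0.10:/mnt/tank/vmstore  /mnt/vmstore  nfs  rw,hard,vers=3  0 0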

Perimeter Hypervisor
ESXi or Proxmox or whatever works really
Primarily hosting OPNsense firewall, Pi-hole, Honeypots, etc.
I'll definitely be switching perimeter firewalls every now and again between open source and Juniper, Palo Alto, etc.
Xfinity is only 1G, so I'm OK with using the onboard NICs
  • Raijintek Metis
  • Asrock Rack E3C224D2I
  • E3-1265L v3
  • 16GB RAM (maxed)
  • SSD/NVMe local storage
  • 2 x Intel i210
  • iSCSI or NFS to the NAS, but it's going to be at 1G speeds
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Your post reminds me of my own "home" adventure ;) of swapping setups, hardware, etc., to find the best fit :)

I'm on revision 3 or 4 over the last couple of years myself, and I think I'm going to go with this. After a last couple of weeks of tinkering and deciding between 'one for all' or splitting up even more... I've gone SPLIT UP even more :)


FreeNAS (E3-1241 v3)
- General / all-purpose storage for home files (family pics, photos, backups, etc.)
- LAN shares for Windows/phone users
- Test out Nextcloud on FreeNAS
-- FreeNAS will be where I first attempt to run anything 'home' that must run 24/7;
if it doesn't work, then off to the Proxmox home AIO

UniFi Cloud Key Gen2 Plus
- Intel S3710 800GB SATA SSD
- Handles & stores all camera footage
- Add an external SSD or a scripted auto-archive to FreeNAS

Proxmox - Home AIO
- Nextcloud here if it doesn't work so well on FreeNAS
- SSD for local VM/container storage


For me the priority for keeping things online is cameras, then home storage, then whatever else the home all-in-one does on Proxmox. This also makes the power requirement for the cameras/security MUCH lower, without needing to keep a full E3 with HDDs online, and it lets me mix, match, and test new hypervisors while keeping them off my storage rather than in an 'all-in-one'. I'm currently using a dedicated pfSense appliance, and was going to replace it with an upgraded Intel Atom box, but I'm deciding whether I may just go with something from Ubiquiti to keep it simple and usable with the Cloud Key Gen2 and PoE switch I have coming.

Maybe my new plans help you think about yours :)
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha, Florida
Warning - this ended up longer than I wanted lol. Hope it helps.

I just went through a change-up as well: a complete design flip, going from two servers running similar specs with FreeNAS storing the VMs to using a local RAID card for VM storage and giving up some vMotion options.

I just got a used Dell R730xd with dual E5-2680 v3 and 256GB RAM. It has an H730P RAID card onboard. I also kept my Dell R720 server, which recently had its memory upgraded to 256GB of DDR3-1866 RAM.

The Dell R720 has two LSI 12Gb HBA cards: one external that connects to a 16-bay SM JBOD case, and one internal that connects to a Dell R730 8-bay SAS3 cage I added to the R720. Originally I had two FreeNAS VMs running on that server. FreeNAS 1 connected to the external LSI card and ran my storage pool: 8 x 8TB 3.5" HDDs and a few smaller drives. FreeNAS 2 ran my storage for VMs; it had 8 x 800GB SAS3 SSDs in a RAID 10 setup. I used an Optane 900p 280GB as SLOG for the pool. Both FreeNAS VMs had 64GB RAM.

After getting the Dell R730, I was able to get better performance using the H730P RAID card with my SAS3 SSDs than on the FreeNAS server. Read speeds were about the same between FreeNAS and the RAID card until the 2GB cache on the card ran out, at which point they would drop.
So reads would be around 6000 MB/s with cache and 3000 MB/s without it on the RAID card. FreeNAS pretty much kept up on reads, as I had plenty of RAM (64GB).

Writes are where FreeNAS lost out to the RAID card. FreeNAS with the Optane could handle about 500 MB/s when the pool was set to sync=always, which is important for VMs. The RAID card just passed the SSD speeds through, so it was around 2500 MB/s.
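
For anyone wanting to replicate that setup, the relevant ZFS knobs are one-liners; a sketch with hypothetical pool, dataset and device names:

  # make the VM dataset honor every sync request (the safe setting for VM storage)
  zfs set sync=always tank/vmstore

  # add two SSDs as separate (effectively striped) log devices for the SLOG
  zpool add tank log /dev/da8 /dev/da9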

So here's what I decided to do with the current setup.

1. I made the Dell R720 my FreeNAS system for storage, also running my firewall software (pfSense and Sophos) and a few other key VMs (Windows Server Essentials for backup/storage and a Windows server for backup DNS/AD).
I gave FreeNAS 10 CPUs and 96GB RAM. I removed the Optane drive from the server and decided to use some high-write-endurance 100GB SAS2 SSDs as SLOG for my various storage pools. I have two SSDs striped per pool, so I get close to 300-400 MB/s write speeds on the pools, which is plenty for storage on 8TB drives. FreeNAS connects to the other server over a 10Gb connection, so I can maintain the write speeds and saturate the reads. Also, since my Windows server is local on the same ESXi host as the storage pool, I can get higher than 10Gb speeds over the virtual network, i.e. 1400 MB/s read / 650 MB/s write.

2. I made my R730xd the primary server for development, externally hosted VMs, workstation VMs, etc. I set up one RAID 10 pool using 4 x 1TB SATA SSDs for my slower dev environment, and I took the 8 x 800GB SAS3 drives from my FreeNAS 2 server and created a RAID 10 for my workstation and external VMs.

Some pros vs. cons of the new design vs. the old:

1. Con
Lost live vMotion. When the VMs were on FreeNAS I could easily move the compute resource to a different server. Live vMotion between the hosts is also blocked by the differences between the v2 and v3 CPUs of the two servers. So if I want to move a VM, I need to shut it down and migrate it to the other server plus different storage. When I need to upgrade the R720, I will lose internet, since I can't easily move the VMs using live vMotion.

2. Pro
The servers are using a lot less electricity; I'm saving at least 100W with the new setup. They also run with less heat and noise thanks to the upgrades. The setup is a lot easier: I don't need to manage two FreeNAS VMs, and when bringing down the FreeNAS host I don't need to move all the VMs off FreeNAS storage to local storage. Now I can shut down the R730 without losing the network/FreeNAS, since it's running on the R720.

So for me, FreeNAS for mass storage is definitely worth it. I love having storage passed to Windows using iSCSI and to ESXi hosts using NFS over a 10Gb connection. Great for storage of files, backups, etc.
FreeNAS for VMs was a bit of a mixed bag. I just didn't feel like the SSD speeds came through with FreeNAS, especially for writes. And even with Optane, write speeds weren't that high. If I ran sync=disabled then speeds were high, but the risk would be corrupted VMs, and when I spend weeks building out dev systems, I just don't want to risk it.

But your use case might be OK running FreeNAS for VM storage with a fast SLOG to increase write speeds. If you plan to use HDDs for storage and need more read speed, then FreeNAS would probably be a better fit than a RAID card, especially if the host has lots of RAM.
 

zer0sum

Well-Known Member
Mar 8, 2013
Thanks, everyone, for the input here. I really appreciate it :D

I'm still waiting on some hardware, but I have played around with the base system a bit already and done some very basic testing.
I'd like to expand my testing, but I can't seem to get the Intel NAS testing software working; I'll keep trying.

In the meantime, I installed each NAS OS and then set up a simple SMB share using a 1TB HP EX920 running off a PCIe card.
I direct-connected my workstation to the NAS through a Mellanox CX3 at a 40Gbps link speed,
then mapped the drive in Windows and ran NAS Performance Tester out of a RAM drive, using 1000MB test files and 40 loops.
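
If anyone wants to reproduce this without the GUI tool, a roughly equivalent pair of fio runs against the mapped share would look like this (the Z: drive letter and file name are assumptions; fio on Windows needs the colon escaped):

  fio --name=seqwrite --filename=Z\:\nastest.bin --rw=write --bs=1M --size=1000M --loops=40 --direct=1
  fio --name=seqread --filename=Z\:\nastest.bin --rw=read --bs=1M --size=1000M --loops=40 --direct=1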

FreeNAS
Average (Write): 1201.12 MB/sec
Average (Read): 1164.78 MB/sec

RockStor
Average (Write): 1380.72 MB/sec
Average (Read): 1127.32 MB/sec

UnRaid
Average (Write): 763.77 MB/sec
Average (Read): 875.09 MB/sec

Like I said, it's really basic testing, but it's interesting to see where they land in a default config with a simple SMB share on a fast NVMe drive.
 

gea

Well-Known Member
Dec 31, 2010
DE
What do you want to check?

The OS? (Linux vs. the Unix options like FreeBSD or Solarish), service/filesystem integration, and quality of drivers on the different OSes?

The filesystem? ZFS has the best data security and feature set; ext4 is faster, especially with less RAM. Real-time RAID scales performance with the number of data disks, while non-RAID systems always give single-disk performance.

Performance of the SMB server (e.g. SAMBA vs. the multithreaded in-kernel ZFS/SMB server on Solarish)?

Single or multiuser, sequential or random?

The impact of a GUI, e.g. FreeNAS vs. a lightweight XigmaNAS (both FreeBSD), or a full-featured Linux vs. a minimal OmniOS that is feature-complete for iSCSI/FC, NFS and SMB without 3rd-party services or applications?
 

zer0sum

Well-Known Member
Mar 8, 2013
What do you want to check?

The OS? (Linux vs. the Unix options like FreeBSD or Solarish), service/filesystem integration, and quality of drivers on the different OSes?

The filesystem? ZFS has the best data security and feature set; ext4 is faster, especially with less RAM. Real-time RAID scales performance with the number of data disks, while non-RAID systems always give single-disk performance.

Performance of the SMB server (e.g. SAMBA vs. the multithreaded in-kernel ZFS/SMB server on Solarish)?

Single or multiuser, sequential or random?

The impact of a GUI, e.g. FreeNAS vs. a lightweight XigmaNAS (both FreeBSD), or a full-featured Linux vs. a minimal OmniOS that is feature-complete for iSCSI/FC, NFS and SMB without 3rd-party services or applications?
That's a lot of really good questions!

Use case is really simple: it's my home NAS.

I run a lot of firewall and networking virtual machines and am constantly installing and testing, and I want fast datastores for VMDKs and qcow2s.

I just want fast performance to make my day job less painful
 

gea

Well-Known Member
Dec 31, 2010
DE
Of data security, performance and price you can optimize only two; the third then follows from them. You must also decide whether you want pure storage, or whether a certain OS or feature is a must-have.

First decide between security, performance and price, then OS and extras. If data security comes first, use ZFS, nothing else. For performance the hardware is what matters, especially RAM for caching. Special features can be filesystem-based encryption or special non-storage tools like media or photo tools.

For the OS you can use a regular enterprise OS, e.g. Solaris, or a special/limited NAS distribution. For some regular operating systems you can add a GUI, like I do with napp-it for Solaris, OpenIndiana or OmniOS. Some, like OMV, use a quite regular Linux.

For VM use, you can run ESXi and virtualise a web-managed ZFS storage appliance like OmniOS, or a FreeBSD-based one like FreeNAS, following my 10-year-old idea; see my how-to: https://napp-it.org/doc/downloads/napp-in-one.pdf
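
On the ESXi side of that all-in-one pattern, mounting the appliance's NFS export as a datastore is a single command; a sketch, where the IP, export path and datastore name are hypothetical:

  esxcli storage nfs add --host 10.0.0.10 --share /tank/nfs --volume-name zfs-datastore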
 

zer0sum

Well-Known Member
Mar 8, 2013
What kind of network is that? 3GB/s transfers with FreeNAS!
It's really just a limited test to see how fast a single NVMe share might be :D

Windows 10 running Intel NASPT
Intel 660p NVMe

FreeNAS - latest
Single HP EX920 NVMe
Standard install, single pool/dataset, no cache, etc.

They are directly connected with Mellanox ConnectX-3 cards in Ethernet mode, at 40G, using a DAC cable.
 

Marsh

Moderator
May 12, 2013
I run a lot of firewall and networking virtual machines and am constantly installing and testing, and I want fast datastores for VMDKs and qcow2s.
Wouldn't some local NVMe SSDs be faster than a NAS?
Heck, maybe even RAID 0, since you don't care about the VMs.
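
If you go that route on a Linux hypervisor, a throwaway RAID 0 over two NVMe drives is quick with mdadm; a sketch with assumed device names and mount point (this wipes whatever is on the drives):

  # stripe two NVMe drives into one array
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

  # format and mount it for VM images
  mkfs.ext4 /dev/md0
  mount /dev/md0 /var/lib/vms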
 

zer0sum

Well-Known Member
Mar 8, 2013
Wouldn't some local NVMe SSDs be faster than a NAS?
Heck, maybe even RAID 0, since you don't care about the VMs.
I'll definitely have local NVMe on my hypervisor systems, but I still want some fast remote storage so that it's easier to share VMs, etc. :D
 

LaMerk

Member
Jun 13, 2017
You can try StarWind VSAN as a solution for the compute-and-storage scenario. StarWind presents the storage as iSCSI targets to the hypervisors. They have versions for Windows and for ESXi, and they provide really good performance for HA storage. A free version is also available, so you can run a test and see the results.
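
I don't have StarWind-specific commands handy, but from the hypervisor side attaching any iSCSI target looks the same; a generic ESXi sketch, where the adapter name and portal address are assumptions:

  # enable the software iSCSI initiator and point it at the target portal
  esxcli iscsi software set --enabled=true
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.20:3260
  esxcli storage core adapter rescan --adapter=vmhba33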
 

Zamana

New Member
Oct 22, 2019
A little bit of an old thread, but I'm in the same boat right now.

After using Proxmox for more than 6 months and then FreeNAS for the same amount of time, I finally understood the difference between them and what each is good and/or bad at.

1) FreeNAS

1.1) Advantages

a) I consider FreeNAS an "install & go" NAS, in the sense that it provides most of the services that you need, and it's great for storage.

b) It behaves like a real appliance, in the sense that you do almost nothing at the host level. This is very well aligned with the FreeBSD approach, which really separates the "core" UNIX system from the rest.

c) Jails, the container technology, is great: very easy to use and maintain, and almost transparent to the system, much like Solaris zones.

1.2) Disadvantages

a) On the other hand, FreeNAS has very limited virtualization capability. Really, I consider bhyve one of the worst virtualization platforms that exists right now.


2) Proxmox

2.1) Advantages

a) Proxmox is primarily a virtualization platform, so you need to build your own NAS from the ground up. This can be an advantage if you know how and want to build everything from scratch, or not. YMMV.

b) Proxmox is better than FreeNAS for virtualization due to its use of KVM, which seems to be much more flexible.

c) LXC, the container technology, is fine, but the use of LVM is a downside for me, mainly when you don't plan well and start to run short on "disk" space (or end up keeping lots of free space for nothing...)
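
To be fair, growing a container volume after the fact is at least a one-liner in Proxmox; a sketch, assuming container 101 needs 10GB more on its root disk:

  # grow the underlying volume and the filesystem inside it
  pct resize 101 rootfs +10G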

2.2) Disadvantages

a) It doesn't behave like a real appliance, so you need to discipline yourself not to screw up the system.


So, right now, after using both, I'm still wondering whether I'll keep FreeNAS or go back to Proxmox. I like the FreeBSD approach (and the native ZFS support), but sometimes the lack of a good virtualization platform makes the difference.
 

vl1969

Active Member
Feb 5, 2014
Zamana, just an FYI, but Proxmox supports ZFS now, and well.
Yes, the GUI is limited, but the command line is not too bad for most uses.
I am currently running a Proxmox setup with full ZFS, including custom ZFS pools that are not part of the OS. Works well for my needs.

I run Emby and Jellyfin LXCs and a JDownloader LXC, and plan to add a file server and other media-related containers.
So far I like it.
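
The custom-pool part is just two commands on the Proxmox host; a sketch with hypothetical pool and disk names:

  # build a mirrored pool on two spare disks (separate from the OS pool)
  zpool create tank mirror /dev/sdb /dev/sdc

  # register it with Proxmox for VM disks and container root filesystems
  pvesm add zfspool tank-vm --pool tank --content images,rootdir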
 

Zamana

New Member
Oct 22, 2019
Zamana, just an FYI, but Proxmox supports ZFS now, and well.
Yes, the GUI is limited, but the command line is not too bad for most uses.
I am currently running a Proxmox setup with full ZFS, including custom ZFS pools that are not part of the OS. Works well for my needs.

I run Emby and Jellyfin LXCs and a JDownloader LXC, and plan to add a file server and other media-related containers.
So far I like it.
Oh, yes. I'm sorry for not having been clearer with my words...

What I meant by "native" regarding ZFS is the fact that, due to license restrictions, ZFS is "integrated" into FreeBSD, contrary to Linux, where it is a kernel module.

But of course Proxmox comes with ZFS ready to use, including the possibility of using it on the root filesystem.

Thanks for bringing this to my attention.

Regards.