Proxmox VE 6.1 KVM-based all-in-one FreeNAS guide


Glen

New Member
Apr 27, 2020
I was wondering if anyone knew of an updated guide to:
https://www.servethehome.com/the-proxmox-ve-kvm-based-all-in-one-freenas/

that would be current for hardware and software in 2020.

I want to build a home server like this, but I have not really built anything like this before. I have built many gaming computers over the years, and I am an experienced sysadmin, but I have never been in a position where something like this was feasible cost-wise.

I do not have a rack, or really any place to put one in my apartment, so I am looking for something like a SilverStone mini-ITX NAS case for my build, e.g. the SilverStone CS280.

I am also planning to use a Supermicro motherboard with an SoC.

Since I have not done this before, I am not quite sure how much memory, storage, and cooling I need.

My goals for this build are:
Backup storage for my other devices
VMs for Plex, a web server, random Linux distros for home lab use, a Minecraft server, and other ideas as they come up.
 

WANg

Well-Known Member
Jun 10, 2018
New York, NY
It's not that outdated - in fact, @Patrick built a TrueNAS machine based on the Silverstone CS280 and a Xeon-D 1500 series board less than 2 weeks ago, and TrueNAS is essentially the latest FreeNAS version under a different branding. There aren't really that many changes in the software stack that are fundamentally different from the write-up 4 years ago other than the version numbers (it's Proxmox 6 instead of 4, TrueNAS Core instead of FreeNAS 9, etc.), and the hardware isn't too much different: NVMe drives still run hot when used in a small chassis like the SYS-5028, avoid WD since they sneaked shingled magnetic recording drives onto their NAS lines (terrible practice), and the Xeon-D 1500 series is still considered okay for production (hopefully with 4-5 years of price depreciation baked in - I am not going to spend 1500 on one... 400 is more like it).

The main gist from that write-up is that this all-in-one machine is really meant to be an evaluation/test machine and not "production ready". Patrick has one VM with 2 disks on passthrough for testing FreeNAS, another VM with 2 disks for testing Ceph, and for Proxmox (the hypervisor hosting the VMs) he has 2 SSDs hosting the Proxmox install and the VM images. It's a little redundant (you can run Proxmox off a single SSD, and with good backups you don't need redundancy there), and it's not for production (your NAS should ideally be a NAS and little else, and it certainly should be more redundant than 2 disks in a RAID1 setup). The old adage of "just because you can doesn't mean you should" applies well here.

Okay. You need to think about this in terms of "what do I need to run", "how much CPU firepower/RAM/storage per VM", "how many VMs do I need to run", "how much redundancy do I want to pay for", and most importantly "how much noise can I tolerate from my lab", "how much power can I pull through the breaker before it pops", and "how much of my money is going to ConEd/ComEd/PECO/Entergy/HQ/HydroOne/EON/EdF/NatGrid/whoever runs the power grid in my area". You need a good idea of these before you plan the hardware.

Here's a few freebies:

a) You can run a Minecraft server on a Raspberry Pi 3 or 4 easily
b) Most stand-alone Linux distros will be fine on 768MB of RAM, unless you plan to host enterprise Java apps - in which case, 2-8GB per VM is typical
c) Plex requires a robust CPU or GPU for transcoding/streaming. That's either Intel Quick Sync or GPU offloading (passthrough on nVidia for NVENC, or AMD for VCE...)
d) If you are leaning towards FreeNAS, I would go at least raidz1 or raidz2 (RAID5 and RAID6 respectively) - preferably raidz2, which tolerates 2 drive failures in an array.
e) The CS280 does not do 3.5" drive bays, so if you plan to use huge but slow hard drives for warm storage (stuff that doesn't get written to often)...well, you'll have to kludge up something.
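As a rough capacity check for option (d) - a back-of-envelope sketch, assuming six identical drives and ignoring ZFS metadata/slop overhead - usable space in a raidz vdev is roughly (disks - parity) × drive size:

```shell
# Rough usable capacity of a raidz vdev: (disks - parity) * drive_size
# (ignores ZFS overhead; real usable space will be somewhat lower)
disks=6
size_tb=4
parity=2          # raidz2 = 2 parity drives; use 1 for raidz1
usable=$(( (disks - parity) * size_tb ))
echo "raidz2 of ${disks} x ${size_tb}TB: ~${usable}TB usable"
```

With raidz1 (parity=1) the same six drives would yield ~20TB, at the cost of only single-drive fault tolerance.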
 

Patrick

Administrator
Staff member
Dec 21, 2010
I think the biggest change @Glen since 2015 is that Proxmox VE has pretty good ZFS support. ZFS on Linux has also gone from a fringe project to almost mainstream, to the point that Ubuntu 20.04 LTS has it in the desktop installer.

Perhaps the biggest question now is whether you even need FreeNAS. Proxmox VE now has a GUI for making ZFS arrays. The one big feature they are missing is exporting ZFS arrays as network shares which is much easier on FreeNAS.

The feedback here is wise. Perhaps this is highlighting a need to re-do some of these guides for 2020.
 

WANg

Oh yeah, ZFS mainstreaming is definitely a major difference. Gone are the days of me running a crappy little Dell 1U with Nexenta (an OpenSolaris flavor) in 2012 just so I could drive an Infortrend iSCSI array for ZFS pools at work.

Well, as for exporting ZFS pools to network shares with Proxmox, this is really a question of what you want your hardware to do, and whether you are comfortable in terms of segregation of duties. For a one-node, do-it-all setup, yeah, I guess you might not need it - although if you are running functions like Bacula/Time Machine/SMB/NFS, you really should segregate them in a VM or a jail. You don't want to accidentally share out the Proxmox config directory, or end up with a weird franken-Proxmox setup that stops working on the next upgrade because Proxmox patched something unrelated and all of a sudden your shares stop working (oh no, they pushed out a shared lib that fixes Ceph but breaks nfsd or smbd, because no Proxmox dev thought anyone would run those daemons right on the hypervisor itself). Hacking Proxmox to do all of that is just... *ugh*. I don't recommend it.

Plus, you could always stand the problem on its head and run FreeNAS and host VMs using bhyve (which is built into FreeNAS).
 

PigLover

Moderator
Jan 26, 2011
For ZFS and NFS I agree completely - Proxmox provides enough to manage most use cases natively without FreeNAS.

Where it lacks compared to FreeNAS is in management of users, permissions and Samba shares, etc.

If you have a simple setup and are comfortable managing permissions and Samba on the Linux command line, I'd just run Proxmox as-is and avoid the extra layer of running FreeNAS inside it. In fact, this is what I do. But there remain plenty of good arguments for running FreeNAS in a VM.
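For the "managing users and Samba on the command line" part, the per-user steps are short. A sketch, assuming Debian-based Proxmox, a hypothetical user named alice, and an existing [share] section in smb.conf:

```shell
# Create a system user (no login shell needed for a share-only account)
sudo adduser --disabled-login alice

# Give the user a separate Samba password and enable the account
sudo smbpasswd -a alice
sudo smbpasswd -e alice

# Reload Samba to pick up any smb.conf changes
sudo systemctl reload smbd
```

Filesystem permissions on the shared path still apply on top of the Samba account, so chown/chmod the dataset accordingly.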
 

Glen

I really appreciate all the information provided. While this is not going to be for business production use, I do want it to be pretty stable and reliable. The FreeNAS is mainly going to be used as a backup location for several Windows computers and a couple of Mac laptops. It will also serve as the storage used by Proxmox for my VMs. I plan to have a VM as a web server for my fiancée to use as a test server before she publishes to her domains. The second VM will probably be Nextcloud so we can keep our data somewhere besides Google Drive, Dropbox, OneDrive, etc. The third VM would probably be Plex, but maybe not right away, as I am not fully familiar with all that is involved in setting Plex up. Other than those, I'll spin up a VM here and there for checking out Linux distros. I do have more details on my hardware now, so I was wondering if I am going overboard or not planning for enough system resources:

Silverstone CS280
Supermicro A2SDi-8C-HLN4F Motherboard
4 x Supermicro MEM-DR432L-SL02-ER24 32GB DDR4 2400 RDIMM Server Memory RAM(128GB)
Micron 2200 MTFDHBA256TCK-1AS1AABYY 256GB NVMe PCIe 3.0 x4 TLC M.2 2280 SSD (where I will install Proxmox)
6 x 4-6TB 2.5" NAS HDDs (having trouble finding 2.5" HDDs in this size)
2 x 2TB 2.5" SSD
SilverStone Technology 450W SFX Form Factor 80 Plus Gold Full Modular Power Supply with +12V Single Rail, Active PFC (ST45SF-G)
 

mcllisms

New Member
Jun 5, 2020
But why are you running FreeNAS in a VM? I would suggest building another physical server for your FreeNAS and using that as a datastore for your Proxmox server.
I'm doing the same setup. I have 3 Proxmox servers built already, and I'm working on the 4th machine, which is going to be my FreeNAS storage unit.
 

sboesch

Active Member
Aug 3, 2012
Columbus, OH
I have been presenting ZFS datasets from Proxmox for a while now. It's pretty straightforward, and I would expect them to add the ability to present ZFS datasets and pools within their web UI in a future release. To present my storage via NFS, I installed the nfs-kernel-server package, then shared the dataset with zfs set sharenfs=on pool_name/dataset_name. For SMB/CIFS shares, I installed the samba package and edited smb.conf:

[global]
server role = standalone server
create mask = 0777
directory mask = 0777

[share]
comment = root share
browseable = yes
path = /pool_name/dataset_name
guest ok = no
read only = no

Now you have a Proxmox NAS.
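From a Linux client, the shares above can be sanity-checked like this - a sketch, assuming a hypothetical hostname of proxmox-host, the same pool_name/dataset_name paths, and the nfs-common, smbclient, and cifs-utils client packages installed:

```shell
# List the NFS exports the server is offering (hostname is illustrative)
showmount -e proxmox-host

# Mount the NFS export
sudo mount -t nfs proxmox-host:/pool_name/dataset_name /mnt/nas

# Or list the SMB shares, then mount one (prompts for the Samba password)
smbclient -L //proxmox-host -U youruser
sudo mount -t cifs //proxmox-host/share /mnt/nas -o username=youruser
```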
 

Glen

Truthfully, it is due to a lack of physical space for setting up multiple physical boxes.
 

Glen

I have decided to make some changes to this build and find I am having difficulty finding good info on a few things.

First, as was mentioned earlier, the Silverstone CS280 hot-swap bays are 2.5" only and it is difficult to find drives larger than 2TB that will fit, so I am thinking of using either the Chenbro SR30169T2-250 or the Supermicro CSE-721TQ-250B black mini-tower server case instead.

Second, I would like to go with an AMD-based system board, but I am having difficulty finding a Mini-ITX board that can handle 128GB+ of RAM. Any help would be very appreciated.

Third, I am paring back on the storage to just 4 x 4TB Seagate IronWolf CMR drives and 2 x 2TB SSDs.

The main reason for the changes is to lower the cost a bit and possibly be a bit more energy efficient.

Finally started finding some info. Here is what I have planned now: WishList

Am I way overshooting for my needs?
 

Wolvez

New Member
Apr 24, 2020
Am I way overshooting for my needs?
IMO yes. Depending on what you want to do with Plex, you could easily run what you have posted with a 4-core CPU and 32GB of RAM. My old setup was a Xeon E3-1230 v3, 32GB RAM, 4x 400GB Intel S3700s in ZFS RAID10, and 4x 4TB Seagate spinners in ZFS RAID10. With that I ran these VMs:
pfSense
opnSense
Docker
MariaDB
ELK stack
HAProxy
3x web servers
Proxmox Mail Gateway
Mail server
SOGo webmail
Nextcloud
plus 3 other 0.5GB VMs doing random stuff
and 4GB for ARC cache

Sometimes some of the VMs (especially the ELK VM) or Proxmox itself would hit swap, but since it was all on flash and lightly loaded, I never noticed a performance hit. You may also want to plan for storage expansion in the future. I started out with 2x 4TB mirrored, and after about a year I went to 4x 4TB ZFS RAID10. That lasted about 5 years, and I recently upgraded to 2x 12TB drives. If you fill all 4 slots to start with, your only option to increase storage is to replace all 4 drives. It worked out well for me; others might have different ideas though. These are the drives I bought.
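For the expansion path described above - growing capacity by swapping in bigger drives rather than adding slots - ZFS lets you replace disks one at a time. A sketch, assuming a pool named tank and illustrative device names:

```shell
# Let the pool grow automatically once every disk in a vdev is larger
zpool set autoexpand=on tank

# Swap in bigger drives one at a time; wait for each resilver to finish
zpool replace tank /dev/sdb /dev/sdd
zpool status tank          # watch resilver progress before the next swap
zpool replace tank /dev/sdc /dev/sde

# After the last replacement, the extra capacity shows up here
zpool list tank
```

In a RAID10-style pool of mirrors you only need to replace the two disks of one mirror vdev to grow that vdev, which makes the upgrade cheaper than replacing a whole raidz stripe.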

Also, I went from running an ESXi/FreeNAS 9 AIO to just running Proxmox, because unless I gave FreeNAS 16GB of RAM it ran like a turd. I wasn't even able to get gigabit speeds to my CIFS share. With Proxmox everything was faster while using fewer resources, and reboots weren't such a pain.