Minimal FreeNAS build for off-site backup

Jeremy Lea

New Member
Apr 21, 2017
Davis, CA
Hi all,

I wanted to share a post about a minimal FreeNAS box that I built a few months ago. It has been working really well for me, although I'm not sure I would recommend this as a production build (see the details below).

Some background: I am very experienced with FreeBSD (and computers in general), but wanted to get a feel for FreeNAS. Because of the constraints of FreeNAS 9.3, I chose vanilla FreeBSD for my all-flash array machine for VMware, but I needed an off-site backup machine for various backups of backups, without spending a ton of money. We are also in the process of replacing all of our desktop HDDs with SSDs, and of relieving people of their external backup disks (which were often the only actual copy of their data, so not really a backup...), so I had a stack of old 3.5in desktop disks of various ages and capacities that I wanted to use. In addition, when you build yourself a really nice box, you don't actually get much experience with admin tasks, because it just works. So I decided to build a low-cost FreeNAS server using all of those disks, knowing that I would have a lot of disk failures.

At the time there was a special on a Lenovo ThinkServer TS140 with a 3.4GHz i3-4130 and 4GB of RAM for $229, so I used that, added 16GB of additional RAM, four Sans Digital TR4M+BNC enclosures, a Sans Digital HA-DAT-4ESPCIE 4 x eSATA card, and 2 x 16GB USB drives for a mirrored FreeNAS boot device. Total cost (without disks) was ~$1000. I've seen similar machines on similar specials recently.


Into this I threw 10 x 1TB disks of various models, in a 2 x 5-disk RAID-Z2 config, with one disk of each vdev in each little expander box and the fifth disk of each internal to the server (where they are a little tricky to hot swap, but it can still be done). I put 2 x 2TB disks into a mirror in two separate TR4M boxes, and then put a bunch of random disks into the remaining six slots in a RAID-Z2. That gave me three pools: 6TB (usable), 2TB, and another (which is now 4TB, since the smallest disk is now 500GB). Mostly, these layouts were based on the disks I had around; with another mix of disks I would probably have done something different... The disks are shared via SMB and NFS (to VMware).
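For anyone curious what that layout looks like at the command line, here is a rough sketch. The pool and device names (backup1, ada0, etc.) are placeholders, not my actual setup, and FreeNAS would normally do this for you through the GUI (using GPT partitions and gptids rather than raw devices):

```shell
# 10 x 1TB in two 5-disk RAID-Z2 vdevs -> 6TB usable (3 data disks per vdev):
zpool create backup1 \
    raidz2 ada0 ada1 ada2 ada3 ada4 \
    raidz2 ada5 ada6 ada7 ada8 ada9

# 2 x 2TB mirror, one disk in each of two separate TR4M boxes:
zpool create backup2 mirror ada10 ada11

# Six mixed disks in RAID-Z2; capacity is limited by the smallest disk:
zpool create backup3 raidz2 ada12 ada13 ada14 ada15 ada16 ada17
```

The point of one disk per vdev per enclosure is that a whole enclosure (or its eSATA cable) can drop out and each RAID-Z2 only loses one member.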

Anyway, this has been running for 18 months or so, and (as hoped) I've had a bunch of disk failures and gained lots of experience with FreeNAS and replacing disks. Despite some fairly severe combinations of failures, I've not lost any data yet. I've also experimentally done some nasty things to the one VM I have running in ESXi over NFS to this machine, like filling up the backing disk, rebooting the server, etc. Even though the NFS share is set to async in ZFS, I have had no data issues. As I said, all of the data is backups of backups, but it is data I care about (i.e. not torrents), so I care enough to push through and fix issues, but I'm not going to lose anything if it all comes crashing down - a great environment to learn in.
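By "set to async in ZFS" I mean disabling synchronous writes on the dataset backing the NFS share, which ZFS exposes as the `sync` property (dataset name below is hypothetical):

```shell
# Ignore sync write requests from NFS clients; data still hits disk via the
# normal transaction groups, but a power loss can drop the last few seconds
# of writes. Acceptable here because everything is a backup of a backup.
zfs set sync=disabled backup1/vmware

# Verify the setting:
zfs get sync backup1/vmware
```

Obviously don't do this for a primary VM datastore where those last few seconds matter.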

The box is easily capable of saturating the 1Gb/s connection to the undisclosed remote location. The 10 x 1TB array has a sustained read speed of ~280MB/s, which is not great by SSD standards, but is fine for what I'm doing.

Some lessons I learned along the way:
1. When I first installed FreeNAS the 16GB of RAM hadn't arrived yet, so I was running with 4GB, causing swapping, as you can imagine. One of the disks failed almost immediately, and although the pool was fine, the server crashed because of FreeNAS's poor out-of-the-box decisions on swap. By default it puts 2GB of swap at the start of every disk (and encrypts it, for some reason), so losing a disk can take its swap with it. I replaced all of the swap partitions with gmirror'ed partitions, staggering the mirrors across the external boxes (so slot 0 of pmp0 with slot 1 of pmp1, slot 0 of pmp1 with slot 1 of pmp2, etc.). I only left swap on the ten 1TB disks (10GB of swap is plenty) and turned it off on the rest. As a result I have to rebuild the gmirror devices by hand when I replace one of these disks, but it is much better that way...
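A sketch of the staggered gmirror setup, with placeholder device names (your adaNp1 swap partitions will differ). The idea is that each mirror pairs swap partitions from two different eSATA boxes, so one flaky box or cable never takes out both halves of a swap mirror:

```shell
# Stop using the existing per-disk swap first:
swapoff -a

# Pair swap partitions across enclosures (five mirrors x 2GB = 10GB swap):
gmirror label -b prefer swap0 /dev/ada0p1 /dev/ada5p1
gmirror label -b prefer swap1 /dev/ada1p1 /dev/ada6p1
# ...and so on for the remaining pairs.

# Swap on the mirrored devices instead of the raw partitions:
swapon /dev/mirror/swap0 /dev/mirror/swap1
```

The by-hand rebuild after replacing a disk is the usual gmirror dance: `gmirror forget swap0` to drop the dead component, then `gmirror insert swap0 /dev/ada0p1` to add the new disk's swap partition back in.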
2. The eSATA port-multipliers work well - when there is nothing wrong... When a disk starts to go bad and return errors, all four slots on that cable start misbehaving. Because of the layout of the disks this is not a major problem, but it does mean you need to do a little sleuthing to make sure you replace the right disk, not just the one that is reporting errors because it happens to be used the most.
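The sleuthing mostly comes down to matching the device name in the error messages to a physical serial number before pulling anything. Something like (device name is just an example):

```shell
# See which devices hang off which controller/port-multiplier channel:
camcontrol devlist

# Which gptid the pool says is throwing errors:
zpool status -v

# Map gptids back to adaN device names:
glabel status

# Get the serial number of the suspect disk, to match against its label:
smartctl -i /dev/ada3 | grep -i serial
```

If several disks on one cable show errors, the serial numbers plus the pool's error counts usually make it obvious which one is actually dying.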
3. The eSATA port-multipliers do not like SMART tests, especially at the rapid frequency FreeNAS runs them by default. The SMART data generally does not tell you much anyway, so just switch it off.
4. Don't put your system dataset on the USB boot drives... I quickly wore out the first set ;)
5. The CPU and memory requirements of FreeNAS are generally overstated in the forums. On this box the CPU is never very busy (nor on my real FreeBSD machine - even when it is doing ~2.0GB/s over iSCSI), and the performance of the disks is not that dependent on the amount of ARC used. This might just be my use case... Most of the datasets are LZ4 compressed and even this does not add much load.

I don't run any jails on this machine, mostly because I don't need to. In the future I will likely add a bhyve VM running Windows Server with a DFS-R mirror of my actual NAS store, but I would need to evaluate that for stability. I will probably also add some hot-swap 2.5in disk bays in the 5.25in slots, since I've now developed a decent collection of 2.5in disks from laptops. The machine still has some open PCIe slots and some spare on-board SATA ports. The main problem is that it doesn't have much of a power supply.

Hope this helps someone.


Staff member
Dec 21, 2010
Hi @Jeremy Lea

Great post. Very detailed.

On the USB system drives, I usually go with 32GB drives in a mirror and have yet to see an issue.

I am going to move it to our DIY forum where we usually maintain these posts.


Well-Known Member
Dec 24, 2016
@Jeremy Lea I agree on the CPU utilization for FreeNAS. In an all-in-one setup with 2 cores assigned (12 x 4TB drives in 2 x RAID-Z2), I've not seen it go beyond 60% while doing sustained transfers, or while rebuilding after I mistakenly pulled the wrong disks. But again, the situation may be different for heavy I/O workloads. I don't know how to see how much of the RAM it is actually utilizing. In my case I did not see any noticeable difference between 8GB and 16GB.
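For the RAM question: most of the memory ZFS uses goes into the ARC (read cache), and on a FreeBSD-based system like FreeNAS you can inspect it from a shell. A couple of ways to look, assuming shell access to the box:

```shell
# Current ARC size in bytes:
sysctl kstat.zfs.misc.arcstats.size

# Target and maximum ARC sizes for comparison:
sysctl kstat.zfs.misc.arcstats.c kstat.zfs.misc.arcstats.c_max

# FreeBSD's top also shows an "ARC:" summary line near the header:
top -b | head
```

If the ARC never grows close to `c_max` under your workload, that would be consistent with not seeing a difference between 8GB and 16GB.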