Is FreeBSD the best way to go?


sim871

New Member
Apr 22, 2012
Bermuda
Greetings


I would like to build a ZFS based home server that does the following tasks:
File server generally
Back up two Windows desktops
DHCP server
Firewall
DNS caching
Provide remote access
Time Machine backups
Run SABnzbd
Store MP3, FLAC, and WAV files and serve them to iTunes, Squeezebox, and JRiver
Store photos and serve them to Adobe Lightroom or other Win or Mac program
Store movies and serve them to iTunes and Plex
May need to host a VM or two in order to achieve the above


Looking around, FreeBSD (with ZFS) seems to meet these needs. Although I have no Unix/Linux experience, I am willing to put the time in.

Are these goals achievable with FreeBSD?
Is there a better way? I am biased towards ZFS because of how robust it is, but I am NOT knowledgeable enough about the various OSes ZFS may sit within.
 

sotech

Member
Jul 13, 2011
Australia
I personally favour OpenIndiana+ZFS - just a matter of taste, really; I like using OI more than FreeBSD. OI has napp-it, whereas FreeBSD has ZFSguru, if I recall correctly. I don't know anything about the latter, but napp-it is excellent and is under constant development.

I would consider making an all-in-one system based around ESXi with various VMs to do the tasks you're listing - we have that sort of setup here (well, two of that setup) and they work brilliantly. I like keeping the systems segregated by purpose so as to prevent an issue with one affecting the others.

OI+Napp-it for file serving, photo storage/access for editing (all editing done over network in LR4/CS6) & Time Machine backups as well as movie/music storage
pfsense/untangle VM for firewall/DHCP
Ubuntu for various apps/wikis and a subsonic server, pointed to the music shares on the OI VM
WHS 2011 for caching Windows updates
Ubuntu apt-cache for caching Ubuntu updates
 

sim871

New Member
Apr 22, 2012
Bermuda
Thanks for the reply.
I have no experience with ESXi. I'm reasonably good with windows desktops. I'll do a little research on that.
 

sotech

Member
Jul 13, 2011
Australia
It's a hypervisor that runs VMs; you can install it to a USB stick inside your server (it loads into RAM and runs from there) and then install and manage the virtual machines from a Windows computer on your network. It's handy because it gives you a hardware-independent platform to build your VMs on; I can grab a VM from our server here, take it to one of our other servers in the city, transfer it and boot it up, and despite the hardware differences everything is just peachy. Same goes for upgrading your hardware - no need to reinstall all of your VMs, just reinstall ESXi. It's also handy for powering individual machines off/on or rebooting them remotely without taking the rest of your system down. I don't think we've ever had to reboot ESXi due to a fault.
 

sim871

New Member
Apr 22, 2012
Bermuda
Did some reading at napp-it.org. It seems simple enough, fits my needs, and the flexibility of multiple VMs means I can pick the best OS + application for each particular function. Thanks again for the tip.

Napp-it.org puts a lot of emphasis on hardware selection. I would like to get some recommendations on hardware oriented to this solution.

I figure I would like to have about 12GB+ of usable file storage space.
I want to use the deduplication function of ZFS
Realistically 3-4 music and/or video streams will occur simultaneously

Any suggestions? My budget is around 3K max and I would consider a phased increase in storage size to offset more expensive initial h/ware cost if required.

All help welcome!
 

sotech

Member
Jul 13, 2011
Australia
First off - dedupe is next to useless: the performance hit is massive, and unless you have VERY large amounts of RAM and a lot of identical data it's a waste of time. I've tried it in a range of different setups and the slowdown was severe in every one.
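If you want to check for yourself before ruling it out, ZFS can simulate dedup against data that's already on a pool - something along these lines, where "tank" is just a placeholder pool name:

    # Simulate dedup on existing data and print the estimated ratio
    # plus the dedup-table (DDT) it would need to keep in RAM
    zdb -S tank

    # On a pool that already has dedup enabled, check the real ratio
    zpool get dedupratio tank

    # Dedup is a per-filesystem property and only affects new writes
    zfs set dedup=off tank/backups

If zdb reports a ratio close to 1.0x, dedup buys you nothing and just costs you RAM and IOPS.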

I take it you meant 12TB not GB?

Norco 4224 + server board (Supermicro, Tyan, Intel), a socket 1155 or 2011 Xeon, 16GB (1155) or 32GB (2011) of ECC RAM, a couple of M1015 HBAs to match however many drives you have, plus whatever combination of 2TB and 3TB drives is cheapest at the time - you should come in well under your $3k budget.

If you have a 6-disk raidz2 you lose 2 drives to parity, and as such you could have 2 drives fail and still have your array running. Say you were using 2TB drives; each group of 6 disks would give you 8TB of storage. Two of those and you'd be set with 16TB usable. In Australia you can pick up 2TB WD Green drives for ~$110 each, which means you'd spend about $1,320 on the 12 disks. The 4224 chassis would be about $500, and M1015s are ~$70-90 each - grab three and you have enough ports for 24 drives. After all that you probably have ~$1,000 left to spend on the rest of the hardware.
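To make the capacity side concrete, building that pool and growing it later is one command each time - a rough sketch only, and the disk device names below are examples that will differ on your system:

    # First 6-disk raidz2 vdev (4 data + 2 parity = ~8TB usable with 2TB drives)
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

    # Later, add a second 6-disk raidz2 vdev to the same pool (~16TB usable total)
    zpool add tank raidz2 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0

    # Confirm the layout and capacity
    zpool status tank
    zpool list tank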

Easy to pick up a good quality socket 1155 server board, CPU and RAM for that much - with change. 16GB of Kingston ECC RAM is $150-odd, a Xeon E3-1230 V2 (or better) ~$200+, and a good server board ~$250+. You're in luck, too - the V2 versions of all of those are just hitting the market now, so you'll be buying the latest tech. Grab a Seasonic X-560 PSU or similar to power it all (how much power you need will depend on how many drives you're after). Also grab a low-powered 2.5" HDD or SSD for your first datastore - if you feel like splurging, grab a pair in a mirror for redundancy. Put OI+napp-it on that datastore and then store all of your other VMs inside it. Grab a 120mm fan wall for the Norco (~$20?) and three good 120mm fans (Noctua NF-P12s are good) to keep noise down and drive temps in a good range.

I tend to build with Intel or Asus server/workstation boards as that's what we have ready access to here; in America and other countries you seem to be able to get Supermicro/Tyan/etc. boards easily and cheaply and they're highly recommended. I'd build with them if I could get hold of them for reasonable prices.

Also - don't forget to factor in some offsite backups of important data.

Our WD 2TB Green raidz2 arrays have more than enough performance to stream Blu-ray video to 2 locations at once while serving up photos to workstations and music to however many devices need it (the Subsonic server is accessed by phones/iPads/etc. on the LAN). ZFS uses your RAM as a cache, so give it plenty - anything that's regularly accessed will sit in RAM and be served virtually instantly, without having to go to the disks.
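If you're curious how much of your RAM that cache (the ARC) is actually using, it's easy to peek at the counters - roughly like this, from memory, so double-check the paths on your install:

    # ZFS on Linux: current and maximum ARC size, in bytes
    grep -E "^(size|c_max)" /proc/spl/kstat/zfs/arcstats

    # OpenIndiana / Solaris: the same counters via kstat
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max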
 

dswartz

Active Member
Jul 14, 2011
Another nice feature: if you need to change some config on a VM and/or update software, etc., you just take a snapshot of the VM first. If the change turns out badly, you roll back to the snapshot; otherwise you delete (commit) it.
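The same idea works at the storage layer too, if the dataset holding your VMs or data lives on ZFS - a quick sketch, with an example dataset name:

    # Snapshot the dataset before a risky change
    zfs snapshot tank/vms@before-upgrade

    # If it went badly, roll the dataset back to that point
    zfs rollback tank/vms@before-upgrade

    # If all is well, drop the snapshot
    zfs destroy tank/vms@before-upgrade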
 

sim871

New Member
Apr 22, 2012
Bermuda

Thank you for all your help! I'm just looking up all the gear you suggested and will post a hardware list.
 

gigatexal

I'm here to learn
Nov 25, 2012
Portland, Oregon
alexandarnarayan.com
If you really want to go hardcore you could rock mdadm and LVM instead. I think that's the way I will go - and forgo the GUI and web server for a CLI. Though the ZFS features that FreeNAS, OI, etc. offer are compelling...
 

sotech

Member
Jul 13, 2011
Australia
It's interesting how things change over time - we've left OI entirely for Ubuntu + ZFS on Linux and are much happier for it, given that we're all experienced with Ubuntu but only a couple of us have used Solaris/OI. It's a shame to lose napp-it, mind!
 

pgh5278

Active Member
Oct 25, 2012
Australia
Good afternoon - can you detail a little of your change from OI to Ubuntu + ZFS, and the reasons you're happier? I'm about to build and am keen to know your drivers and feelings on this.
Cheers from PGh in QLD.
 

cactus

Moderator
Jan 25, 2011
CA
I have moved all of my ZFS storage to ZoL also, but, like sotech, I have a lot of Linux experience. To me it depends on what you are comfortable with: if you're coming from the Windows world, OI + Napp-It gets you a very nice UI for managing all things related to the NAS. ZoL is command-line only as of now, and SMB sharing is not built into ZoL the way it is in OI. I am building a NAS for a friend and I am going with OI + Napp-It for simplicity alone.
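To illustrate the difference: on OI the SMB server lives in the kernel and sharing is literally a dataset property, whereas on ZoL (at least right now) the dataset is just a directory you point Samba at. A rough sketch, with example names:

    # OpenIndiana: enable the in-kernel SMB service and share a dataset
    svcadm enable -r smb/server
    zfs set sharesmb=on tank/media      # or sharesmb=name=media to pick the share name

    # ZFS on Linux: add a normal [media] share pointing at the dataset's
    # mountpoint in /etc/samba/smb.conf and restart Samba instead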
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
Familiarity with the underlying systems is pretty much the biggest reason - for me/us it's much easier to troubleshoot things on Ubuntu than Solaris/OI. Things like setting static IPs, using multiple static IPs with multiple NICs, teaming/bonding/trunking and the like are all more straightforward for us.
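To give a flavour of what I mean, a static IP on Ubuntu of this vintage is a few lines in /etc/network/interfaces (the addresses below are just examples):

    # /etc/network/interfaces - static address on the first NIC
    auto eth0
    iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 192.168.1.1

followed by an ifdown eth0 && ifup eth0 (or a reboot) to apply it.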

We also tend to use Ubuntu as the OS for most of our virtual machines - things like Subsonic, ZFS fileservers, wikis, blogs, office computers/workstations and the like all run quite happily on it and there's a pretty huge community of people developing interesting things for it, so being more familiar with it as an OS pays off for more than just the fileserver.

We have had no stability issues with ZFS on Ubuntu, and it's not difficult to put together some basic webpages to show the status/health/temps of your drive pools - nothing like the power of napp-it, since you're not going to be able to change things from a webpage, but knowing the command-line functions relating to ZFS is by no means a bad thing and will improve your confidence in managing it.
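For anyone starting out, the handful of commands you'd wire into a status page (or just run by hand) look something like this - "tank" again being an example pool name:

    zpool status -x                          # "all pools are healthy" or fault details
    zpool list                               # size, allocation and health per pool
    zfs list -o name,used,avail,mountpoint   # per-dataset usage
    zpool scrub tank                         # start a scrub; watch progress with zpool status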

Having said all of that - like cactus, we're still using OI+napp-it for clients who aren't ever likely to want to change anything or use multiple NICs - napp-it is so easy and if nothing changes there should be minimal need to actually poke the underlying OS.
 

dswartz

Active Member
Jul 14, 2011
I tried ZoL on Debian (Proxmox) for a couple of weeks and had to give it up. I had two different kernel panics, neither of which was resolved. Not blaming the devs - I understand they have schedules to meet and what-all. Who knows? It's possible the Proxmox patches to the Debian kernel were the issue. I'll probably revisit it at some point, but it's still a 0.x RC product, and my need is a home/SOHO production use case.