FreeNAS for ZFS


MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
Anyone using FreeNAS 8 with ZFS? Seems like everyone here is looking at OpenIndiana.
 

OBasel

Active Member
Dec 28, 2010
494
62
28
I have been using FreeNAS for a while.
1,000,000x better interface than napp-it, but super slow iSCSI.

If you are using iSCSI, go OI or something Solaris-based.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
I haven't talked to sub.mesa (from ZFSguru) in a while now. I don't think he had anyone helping him, so there is a good chance he ran out of time or lost the desire to continue.
 

NASCake

New Member
Jun 14, 2012
11
0
0
+1 FreeNAS. Interface is clean, lots of hardware support and the community is on top of things.

~NC
 

zicoz

Member
Jan 7, 2011
140
0
16
I am considering going back to FreeNAS and ZFS in my current build after the terrible performance I'm getting with Storage Spaces.

But what "raid" level should I consider? I have close to 40 disks in a mix of 2TB and 3TB (mostly 2TB).

Does ZFS have a "storage pool" level so that all the disks are pooled without being "raid 0"?

Or should I consider "Raid5"/"Raid6"?

I guess the big downside with Raid 6 is that I have to buy a lot of disks each time I need to expand, while with Raid 5 it's limited to 3 or 4 at a time.
 

dswartz

Active Member
Jul 14, 2011
610
79
28
With that many disks, you want the equivalent of raid50 or raid60. I wouldn't mix disks of different sizes unless you have just a couple that are 3TB. In any event, I would suggest something like raidz2, i.e. raid60: stripe four 10-disk raidz2 vdevs, which gives you 32 disks' worth of usable space.
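
For illustration, that layout would look roughly like this at the command line (pool and device names are just placeholders; FreeNAS can build the same thing through its volume manager GUI):

zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
    raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
    raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 \
    raidz2 da30 da31 da32 da33 da34 da35 da36 da37 da38 da39

Each raidz2 group is one vdev and ZFS stripes across all four, so you end up with 4 x 8 = 32 disks of usable capacity, with two parity disks per vdev.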
 

dswartz

Active Member
Jul 14, 2011
610
79
28
Not sure what you meant by 'needing a lot of disks for raid6'. AFAIK, the only diff is 1 more parity disk. With ZFS you can expand by one vdev at a time.
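
Expanding by one vdev is a single command against the existing pool, something like this (placeholder names again):

zpool add tank raidz2 da40 da41 da42 da43 da44 da45 da46 da47 da48 da49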
 

zicoz

Member
Jan 7, 2011
140
0
16
Thank you for the tip. I guess that I would have to get 10 drives every time I wanted to expand then (8 striped for storage and 2 for parity), which is too much for me, so I guess I'll stick with Windows and one of the pool solutions there.


By "needing alot of disks each time" I'm thinking that if I do a "raid5" then I'd go for 3+1, but if I went for "raid6" I wouldn't do a 3+2, I would atleast do a 4+2 which would force me to buy 2 more disks per batch, and with my current disk selection that would be about $450 extra every time.
 

dswartz

Active Member
Jul 14, 2011
610
79
28
If you're willing to live with losing 1/2 your space, make a raid10 - then you can add two at a time.
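
In ZFS terms that's a pool of striped mirrors, so expanding is just tacking on another mirror vdev. Rough sketch with example device names:

zpool create tank mirror da0 da1 mirror da2 da3
zpool add tank mirror da4 da5    # later, to grow by a pair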
 

Tim

Member
Nov 7, 2012
105
6
18
I'm a command-line fan myself, but I do like a good web interface on services, either to check status or to look for "bad things".
So I guess the higher the number of HDDs in the ZFS setup, the more you'll want a graphical interface to work with.
I don't see the need for a GUI to set up ZFS in the beginning (unless you're on multiple pools with tons of HDDs and some complicated setup from day one).

For me, ZFS is mainly set up and forget.
Just watch the logs for errors and the SMART info on the HDDs, and once in a while expand the system with more/new HDDs.

Or is a web interface of some sort the "new oldschool" way of doing things these days?

My experience with ZFS is on FreeBSD 9.0 at the moment (no interface other than the command line over SSH),
and I'm planning to try Solaris 11 as soon as I've got the time.
The only other service that I run on the ZFS-based (virtual) machine is NFS (I've heard that this might be easier/better on Solaris 11 than on FreeBSD 9.0).
I haven't got NFS in ZFS up and running on FreeBSD; I had to disable it in ZFS and use the NFS service in FreeBSD instead.
On Solaris 11 I've heard that you only need to do it in ZFS, but don't quote me on that as I'm not sure (for all I know it just triggers the NFS service in the OS or something).
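
For what it's worth, the ZFS-native way seems to be setting the share property on the dataset itself, roughly like this (dataset name is only an example):

zfs set sharenfs=on tank/data

On FreeBSD that property exists too, but as far as I can tell the OS NFS daemons still have to be running for it to actually serve anything, which would explain what I ran into.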


@JEggs101
I don't know what you're trying to say with "not cheap".
It requires a good amount of RAM, and you need to read up on how to use it (command line, or the chosen graphical user interface).
And depending on the hardware setup (RAID card or not) the cost will go up.
But this is not unique to ZFS.
So I don't get the "not cheap" comment, as every other FS has the same costs (in addition to other licenses).
 

Artzig

New Member
Dec 15, 2012
5
0
0
I'm new to NAS and also to ZFS, but learning fast. The FreeNAS manual is actually a good read; it's written in a straight-up style and somebody has done a good job on it.

Re the configuration: ZFS has a sweet spot of 3-9 disks per virtual device (or vdev as ZFS calls them). The RAID calculator on the STH front page will let you play around with configurations and sizes. The ZFS wiki is a good read.
So is this:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
and this
http://wiki.freebsd.org/ZFSTuningGuide

I set up a trial FreeNAS 8.3.0 system on an aged DV845 chipset board with a 30GB drive, and an even older HDD as a system drive (a 4GB USB stick is optimal), with 768MB of PC100 RAM and a 1.6GHz P4. It worked very well; it totally outperformed a QNAP TS110 NAS and was easier to set up. So that was it! A new system to do it justice is currently in construction. All I did was load the CD, hit go, configure the network card's static IP at the command line, and then log into the web interface.

Not sure yet if the system will report SMART stuff; I think that's a line in a config file somewhere. And I think I'm correct in saying that if you lose a vdev you lose the pool, so bear that in mind for the underlying structure of the system and the way you share out the drives.
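
In the meantime I've been checking things by hand; as far as I can tell these are the usual commands (the drive name is just an example):

zpool status -x        # quick health summary of all pools
smartctl -a /dev/ada0  # full SMART report for one drive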
 

zicoz

Member
Jan 7, 2011
140
0
16
Ok, decided to go with FreeNAS and RaidZ2 after all.

Landed on 8-disk "raids" (6+2) since that matches up best with server chassis, as they are 4 drives wide.

Just two things. I have created my first "raid" and a volume with some shares on top of it, but the way FreeNAS presents it, it seems that I won't get to add other "raids" to that volume. That's wrong, right? I can add another 8-disk "raid" and have the existing shares use that space as well, right?

And how can I change the hostname from "freenas" to "server"? I tried doing it under "Network" -> "Global Configuration" -> "hostname" but it doesn't seem to have any effect. It's still listed as "freenas" in Windows Explorer.
 
Last edited:

zicoz

Member
Jan 7, 2011
140
0
16
Ok, I'm now testing ZFS and FreeNAS, currently have 16 Spinpoint 204UI drives spread across 2 RaidZ2 vdevs, and I seem to be maxing out the 1Gbit connection when writing/reading from it which is good.

I tried running "dd if=/dev/zero of=testfile bs=1024 count=50000" to test the local read/write of the pool, but I keep getting "dd: testfile: Read-only file system"

Do I need to make some changes from the default setup to make this work?

Also, reading the wiki I came across this bug: http://www.freebsd.org/cgi/query-pr.cgi?pr=134491

I would really love to have some hot-spares, am I better off going for Solaris or Open Indiana?

Also, your warning against using a mix of 2TB and 3TB drives is only for within the same vdev, right? There is no problem having 2 vdevs with 2TB drives and then adding another with 3TB drives?





edit: managed to get the benchmarks to work with

Write:

dd if=/dev/zero of=/mnt/StoragePool/tmp.dat bs=2048k count=50k

And I guess the performance isn't too bad at 244.38996 MB/s

As for read I had to use:

dd if=/mnt/StoragePool/tmp.dat of=/dev/null bs=2048k count=50k

Which gave me a result of 302.002193 MB/s
 
Last edited:

Thatguy

New Member
Dec 30, 2012
45
0
0
I was trying FreeNAS with a controller passed through to it under VMware (PITA) and I seemed to be getting about 300-400 MB/s for read/write on my 16x3TB RaidZ3.

I switched over to SmartOS, and am getting an extra 100-200 MB/s on reads and writes.

Hot spares are great to have. That bug you linked is for FreeBSD 7, and there is some hinting about it being fixed for FreeBSD 8 (looks like a bunch of work to backport), and a recent copy of FreeNAS should run FreeBSD 8.3-RELEASE. I am uncertain whether that hot-spare bug is present in the current version of FreeNAS.
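
Adding a spare to an existing pool is a one-liner either way (pool and device names are placeholders); whether it kicks in automatically after a failure is the part that bug is about:

zpool add tank spare da50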
 

zicoz

Member
Jan 7, 2011
140
0
16
According to the FreeNAS forum it won't be fixed until FreeNAS 9 :/ So I might go with Solaris or OI instead.
 
Last edited: