Server 2016 vs FreeNAS ZFS for iSCSI Storage


ColPanic

Member
Feb 14, 2016
ATX
I've been doing some testing with Windows Server 2016 (RTM) and FreeNAS and thought I would share some of my results. My use is purely a home/lab scenario and I'm not an expert on any of this, and it's quite possible that I did everything wrong, so take it for what it's worth (not much).

I've been using FreeNAS for storage on my home/lab server but have never really liked it. While there are undeniable benefits to the underlying ZFS file system and volume manager, the FreeNAS implementation has always seemed like a bit of a hack, relying on the old open-source code from before Oracle closed it. There's also a cult-like mentality among many of its users, and their forums are... well... cult-like: they thumb their noses at all this newfangled virtualization stuff (and tell you to buy more RAM). So I decided to give Windows Server 2016 a try and see how they stacked up. Windows' newer file system, ReFS, debuted in 2012 but has had some time to mature. It fills in many of the features that ZFS has had but NTFS lacked, such as checksumming and scrubbing, and it was designed for storing very large files such as virtual machines or datastores. Microsoft has also added a few key features that ZFS does not have, which are, at least on paper, very appealing.
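
As an aside, ReFS exposes its checksumming ("integrity streams") through PowerShell, so you can check or toggle it per file. A quick sketch, with a placeholder path:

    # Query whether integrity streams are enabled on a file (ReFS volumes only)
    Get-FileIntegrity -FileName "D:\VMs\test.vhdx"

    # Turn integrity checking on for that file
    Set-FileIntegrity -FileName "D:\VMs\test.vhdx" -Enable $true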

The biggest change from 2012 R2 is the introduction of Storage Spaces Direct (S2D), which is basically Microsoft's version of VMware's vSAN and a move toward hyper-converged servers and software-defined storage. That's not really applicable to a single-host home or lab setup, but some of the other upgrades are. One of the best improvements is tiering. You can now have three tiers of storage: NVMe for caching + SSD for "hot" data + HDD for "cold" data. ZFS does caching with slog drives, but it doesn't do much to take advantage of mixing SSDs and HDDs. You can also mix resiliency levels across tiers, e.g. mirror for the SSD tier and parity for the HDD tier, which helps tremendously with write speeds.

Another advantage over FreeNAS is the ability to easily add capacity. You can add disks to a ZFS pool, but, crucially, ZFS does nothing to re-balance the storage. For example, say you have 8 drives running as mirrored pairs that are starting to get full and you add another pair: because the original 8 drives are mostly full, nearly all new writes will go to the new drives, robbing you of the performance advantage you should get from RAID 10. Server 2016 will re-balance existing data across all drives. You can also empty out a drive prior to removing it, which could be useful.
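
For what it's worth, the add-and-rebalance piece is scriptable too. A rough sketch, with a made-up pool name:

    # Add any new, poolable disks to an existing pool
    Add-PhysicalDisk -StoragePoolFriendlyName "Pool1" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

    # Re-balance: spread existing data across all pool members, old and new
    Optimize-StoragePool -FriendlyName "Pool1"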

The Server:
ESXi 6.0 u2
Intel S2600CP with Dual Xeon E5-2670
96GB RAM (20GB to FreeNAS)

The tests were all done from a Server 2016 VM using iSCSI. The host is running ESXi 6.0 with FreeNAS 9.10 and Windows Server 2016 VMs. FreeNAS has an LSI 2008 HBA in IT mode passed through, with 8x WD Re 3TB drives, plus a Nytro F40 SSD as SLOG. Windows has 2x LSI 2008 HBAs passed through: one has 8x Hitachi 7K3000 2TB drives, the other 2x 960GB Toshiba HK3E2 SSDs. The HDD tier is parity, the SSD tier is mirrored. In both cases I created an 8TB iSCSI volume and shared it back to ESXi. There is also a 9260-8i running RAID 5 with 4x 400GB Hitachi SSDs. (Side note: I have to stay away from the great deals forum.)
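
On the Windows side, sharing the volume back to ESXi uses the built-in iSCSI Target Server role; it boils down to something like this (the path and IQN are placeholders, not my exact values):

    # Create a VHDX-backed LUN, create a target for the ESXi initiator, and map them
    New-IscsiVirtualDisk -Path "T:\iSCSIVirtualDisks\lun0.vhdx" -SizeBytes 8TB
    New-IscsiServerTarget -TargetName "esxi-lun0" -InitiatorIds @("IQN:iqn.1998-01.com.vmware:esxi-host")
    Add-IscsiVirtualDiskTargetMapping -TargetName "esxi-lun0" -Path "T:\iSCSIVirtualDisks\lun0.vhdx"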

EDIT: In order to mix resiliency levels across tiers, you have to enable Storage Spaces Direct, which requires 3 hosts. So parity HDD + mirrored SSD is not an option on a single-host system.

For the file copy tests, I created a RAM disk to make sure that wouldn't be a bottleneck.


Benchmark screenshots:
- 1500 MB to ZFS over iSCSI
- 5000 MB to ZFS over iSCSI
- File copy to ZFS from RAM disk
- File copy from ZFS to RAM disk
- 5000 MB to ReFS over iSCSI
- File copy to ReFS from RAM disk
- File copy from ReFS to RAM disk
- For comparison's sake, RAID 5 with 4x 400GB Hitachi SSDs
- Parity only, with no tiering


I'm not going to draw any conclusions other than to say that the results are decidedly mixed. I'm not quite ready to go all-in with Windows, but I'm not going to write it off either. I'm going to do more testing with tiers to see if Microsoft's parity RAID is any better than it was in 2012 R2 without the benefit of SSD caching, and I'm also going to see if the 3rd NVMe tier does much in this use case.

Let me know if there are any other tests you'd like to see or questions about the setup.
 

RobertFontaine

Active Member
Dec 17, 2015
Winterpeg, Canuckistan
Silly question, but is there still active development in the OpenZFS space? I remember the big hurrah when it kicked off a couple of years ago, but I don't see much activity on GitHub when I look at the logs.
 

manxam

Active Member
Jul 25, 2015
There's still a lot of development by the Illumos team (illumos-gate) and they upstream regularly:

Aug 16 to Sept 16:
Excluding merges, 28 authors have pushed 54 commits to master and 54 commits to all branches. On master, 483 files have changed and there have been 7,488 additions and 94,658 deletions.
 

Larson

Member
Nov 10, 2015
Thanks @ColPanic! Very helpful, since I've been going back and forth between these two platforms myself for my own home setup.
 

gea

Well-Known Member
Dec 31, 2010
DE
OpenZFS and the storage ecosystem around it are very active despite the break with Oracle;
see Companies - OpenZFS

Not to forget the push that ZFS on Linux (ZoL) has given OpenZFS development, where ZFS, not Btrfs, seems to be becoming the de facto next-gen filesystem, or Samsung, which recently bought Joyent, one of the key developers behind Illumos, the free Solaris fork.
 

ColPanic

Member
Feb 14, 2016
ATX
Correct me if I'm wrong, but there hasn't been much development in the areas where the FreeNAS implementation is lacking. Specifically: dedupe, encryption, expanding and rebalancing storage, and taking advantage of cheap SSD drives.
 

wildchild

Active Member
Feb 4, 2014
Dedupe, encryption, and expansion are standard parts of ZFS, though development has slowed since Tegile was bought by Oracle.
Please don't make the mistake of thinking ZFS is *BSD-only.
There is also a lot of development on the Illumos (formerly known as OpenSolaris) side of things.
Cheap SSDs are supported, although not advised for the ZIL.
TRIM is available by default on FreeBSD.
 

manxam

Active Member
Jul 25, 2015
@ColPanic, do you happen to have any idea when the second part of your testing will take place? I'm extremely curious about parity without tiering.
 

ColPanic

Member
Feb 14, 2016
ATX
Quoting: "@ColPanic, would you be able to give an explanation of how you set up mixed parity levels in storage spaces?"

You just add all the disks (SSD and HDD) to a new pool; then, when you create the virtual volume, you set the size and resiliency level for each tier. It's pretty obvious in the wizard.

You can also do it all with PowerShell if that's your thing.
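
In case it helps anyone scripting it, here's roughly what that looks like. Pool and tier names and the sizes are made-up examples, and (per the EDIT in the first post) the mixed mirror/parity combo gets rejected on a standalone host without S2D:

    # Grab every disk eligible for pooling and create the pool
    # (assumes a single storage subsystem on the box)
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool1" `
        -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
        -PhysicalDisks $disks

    # Define one tier per media type, each with its own resiliency setting
    $ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" `
        -MediaType SSD -ResiliencySettingName Mirror
    $hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" `
        -MediaType HDD -ResiliencySettingName Parity

    # Carve a tiered virtual disk out of the pool, sized per tier
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredSpace" `
        -StorageTiers $ssd, $hdd -StorageTierSizes 800GB, 8TB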
 

dswartz

Active Member
Jul 14, 2011
Interesting article. One nit, though: SLOG has nothing to do with caching - it is a transaction log.
 

ColPanic

Member
Feb 14, 2016
ATX
It looks like mixing resiliency levels (e.g. mirror for the SSD tier and parity for the HDD tier) can only be used if you enable S2D, and to enable S2D you have to have 3 nodes in a cluster. There may be hacks to get around it, but out of the box this configuration is not supported. You also have to have the Datacenter edition of Windows Server, which makes the whole thing cost-prohibitive unless you have access to free server licenses.
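
(For reference, on a cluster that does qualify, flipping it on is a single cmdlet:)

    # Enables S2D across the cluster's eligible disks; requires a qualifying cluster
    Enable-ClusterStorageSpacesDirect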

I have no idea why they've limited it like this. It's not an engineering limitation - the software can clearly do it. With the explosion of cheap SSDs, this could be a great storage option for SOHO or SMB users who want something more than FreeNAS but don't need HA.

Back to FreeNAS and iSCSI for now.
 

manxam

Active Member
Jul 25, 2015
Thanks for the update, ColPanic. I'm a little confused about your statement though, as you said this in your original post:

"Windows has 2x LSI 2008 HBAs passed through: one has 8x Hitachi 7K3000 2TB drives, the other 2x 960GB Toshiba HK3E2 SSDs. The HDD tier is parity, the SSD tier is mirrored."

Yet above it appears you're saying this cannot be done? Did you have a chance to measure a parity-only HDD pool of any size for a speed comparison against a ZFS pool w/o L2ARC or SLOG?

Thanks!