I've been doing some testing with Windows Server 2016 (RTM) and FreeNAS and thought I would share some of my results. My use is purely a home/lab scenario, I'm not an expert on any of this, and it's quite possible I did everything wrong, so take it for what it's worth (not much).
I've been using FreeNAS for storage on my home/lab server but have never really liked it. While there are undeniable benefits to the underlying ZFS file system and volume manager, the FreeNAS implementation has always seemed like a bit of a hack, relying on the old open-source code from before Oracle closed the source. There's also a cult-like mentality among many of its users, and their forums are... well... cult-like, and they thumb their noses at all this newfangled virtualization stuff (and tell you to buy more RAM). So I decided to give Windows Server 2016 a try and see how the two stack up. Windows' newer file system, ReFS, was introduced back in 2012 but has had some time to mature. It fills in many of the features that ZFS has had but NTFS lacked, such as checksumming and scrubbing, and it was designed for storing very large files such as virtual machines or datastores. Microsoft has also added a few key features that ZFS does not have and that are, at least on paper, very appealing.
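If you're curious what the checksumming piece looks like in practice, ReFS exposes it as "integrity streams". Here's a minimal sketch in PowerShell -- the drive letter and file path are just placeholders, and I'm assuming the stock Storage module that ships with Server 2016:

```powershell
# Format a volume as ReFS with integrity streams enabled for new data
Format-Volume -DriveLetter V -FileSystem ReFS -SetIntegrityStreams $true

# Check or toggle integrity on an individual file/folder after the fact
Get-FileIntegrity -FileName 'V:\VMs\test.vhdx'
Set-FileIntegrity -FileName 'V:\VMs\test.vhdx' -Enable $true
```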
The biggest change from 2012 R2 is the introduction of Storage Spaces Direct, which is basically Microsoft's version of VMware's vSAN and a move toward hyper-converged servers and software-defined storage. That's not really applicable to a single-host home or lab setup, but some of the other upgrades are. One of the best improvements is tiering. You can now have three tiers of storage: NVMe for caching + SSD for "hot" data + HDD for "cold" data. ZFS has the SLOG and L2ARC for write-intent logging and read caching, but they don't do much to take advantage of mixing SSDs and HDDs for actual data placement. You can also mix resiliency levels across tiers, e.g. mirror for the SSD tier and parity for the HDD tier, which helps tremendously with write speeds. Another advantage over FreeNAS is the ability to easily add capacity. You can add disks to a ZFS pool, but, crucially, ZFS does nothing to re-balance existing data. For example, if you have 8 drives running as mirrored pairs that start to get full and you add another pair, nearly all new writes will go to the new drives because the original 8 are mostly full, robbing you of the performance advantage you should get from RAID 10. Server 2016 will re-balance existing data across all drives. You can also empty out a drive prior to removing it, which could be useful.
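For reference, this is roughly how the tiering, rebalancing, and drive-retirement pieces look in PowerShell. Treat it as a sketch only -- the pool/tier names and sizes are made up, and (per the edit below) per-tier resiliency isn't actually available on a single host:

```powershell
# Pool every eligible disk, then define an SSD tier and an HDD tier
$ss = Get-StorageSubSystem | Select-Object -First 1
New-StoragePool -FriendlyName 'Pool1' -StorageSubSystemFriendlyName $ss.FriendlyName `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
$ssd = New-StorageTier -StoragePoolFriendlyName 'Pool1' -FriendlyName 'SSDTier' -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName 'Pool1' -FriendlyName 'HDDTier' -MediaType HDD

# Tiered virtual disk -- one resiliency setting covers both tiers on a standalone box
New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'Tiered1' `
    -StorageTiers $ssd, $hdd -StorageTierSizes 900GB, 7TB -ResiliencySettingName Mirror

# After adding disks, rebalance existing data across all members
Add-PhysicalDisk -StoragePoolFriendlyName 'Pool1' -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
Optimize-StoragePool -FriendlyName 'Pool1'

# Empty a drive before pulling it: retire it, repair, then remove it from the pool
Set-PhysicalDisk -FriendlyName 'PhysicalDisk7' -Usage Retired
Get-VirtualDisk | Repair-VirtualDisk
Remove-PhysicalDisk -StoragePoolFriendlyName 'Pool1' `
    -PhysicalDisks (Get-PhysicalDisk -FriendlyName 'PhysicalDisk7')
```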
The Server:
ESXi 6.0 u2
Intel CP2600 with Dual Xeon E5-2670
96GB RAM (20GB to FreeNAS)
The tests were all done from a Server 2016 VM using iSCSI. The host is running ESXi 6.0 with FreeNAS 9.10 and Windows Server 2016 as VMs. FreeNAS has an LSI 2008 HBA in IT mode passed through with 8x WD Re 3TB drives, plus a Nytro F40 SSD as SLOG. Windows has 2x LSI 2008 HBAs passed through: one with 8x Hitachi 7K3000 2TB drives, the other with 2x 960GB Toshiba HK3E2 SSDs. The HDD tier is parity, the SSD tier is mirrored. In both cases I created an 8TB iSCSI volume and shared it back to ESXi. There is also a 9260-8i running RAID 5 with 4x 400GB Hitachi SSDs. (Side note: I have to stay away from the great deals forum.)
EDIT: In order to mix resiliency levels across tiers, you have to enable Storage Spaces Direct, which requires 3 hosts. So parity HDD + mirrored SSD is not an option on a single-host system.
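For anyone wondering about the share-back step: Windows has a built-in iSCSI Target Server role, and the whole thing is a few lines of PowerShell. Again, just a sketch -- the target name, VHDX path, and initiator IQN below are placeholders, not my exact values:

```powershell
# Install the iSCSI Target Server role (one time)
Install-WindowsFeature FS-iSCSITarget-Server

# Create an 8TB VHDX-backed iSCSI disk on the storage space and present it to the ESXi initiator
New-IscsiServerTarget -TargetName 'esxi-datastore' -InitiatorIds 'IQN:iqn.1998-01.com.vmware:esxi-host'
New-IscsiVirtualDisk -Path 'V:\iSCSI\datastore1.vhdx' -SizeBytes 8TB
Add-IscsiVirtualDiskTargetMapping -TargetName 'esxi-datastore' -Path 'V:\iSCSI\datastore1.vhdx'
```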
For the file copy tests, I created a RAM disk to make sure that wouldn't be a bottleneck.
1500 MB to ZFS over iSCSI
5000 MB to ZFS over iSCSI
File Copy to ZFS from RAM Disk
File Copy from ZFS to RAM Disk
5000 MB to ReFS over iSCSI
File Copy to ReFS from RAM Disk
File Copy from ReFS to RAM Disk
And for comparison's sake, RAID 5 with 4x 400GB Hitachi SSDs
Here is parity only, with no tiering:
I'm not going to draw any conclusions other than to say that the results are decidedly mixed. I'm not quite ready to go all-in with Windows, but I'm not going to write it off either. I'm going to do more testing with tiers to see whether Microsoft's parity RAID is any better than it was in 2012 R2 without the benefit of SSD caching, and I'm also going to see if the 3rd NVMe tier does much in this use case.
Let me know if there are any other tests you'd like to see or questions about the setup.