Server 2016 vs FreeNAS ZFS for iSCSI Storage


ColPanic

Member
Feb 14, 2016
ATX
Here is parity only with 6x 2TB 7K3000 drives. I thought the HDD tier in that test was parity, but it must not have been (unless this feature was removed in a recent update). I went through a lot of different drive configurations.


 

manxam

Active Member
Jul 25, 2015
Thanks @ColPanic, so it's the exact same issue as 2012 R2, where writes were horrendous on parity without an SSD tier.
So much for that...
 

Morgan Simmons

Active Member
Feb 18, 2015
You just add all the disks (SSD and HDD) to a new pool, then when you create the virtual volume you set the size and parity level for each tier. It's pretty obvious in the wizard.

You can also do it all with PowerShell if that's your thing.
That's what I thought as well. However, when I did it through the GUI I didn't get any of the options for how to use the tiers, just the standard 2012 R2 version: set the max amount used for each tier.
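For reference, the PowerShell equivalent of those wizard steps would look roughly like this. A minimal, untested sketch; the pool, tier, and size values are placeholders:

    # Pool every available disk, define the two media tiers, then carve out a
    # tiered virtual disk with a fixed size per tier (2012 R2-style tiering).
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "TierPool" `
        -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

    $ssd = New-StorageTier -StoragePoolFriendlyName "TierPool" `
        -FriendlyName "SSDTier" -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName "TierPool" `
        -FriendlyName "HDDTier" -MediaType HDD

    # Note the single resiliency setting for the whole virtual disk -- the
    # same limitation described above: you only pick how much of each tier to use.
    New-VirtualDisk -StoragePoolFriendlyName "TierPool" -FriendlyName "VMStore" `
        -ResiliencySettingName Mirror -ProvisioningType Fixed `
        -StorageTiers $ssd, $hdd -StorageTierSizes 400GB, 4TB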
 

Morgan Simmons

Active Member
Feb 18, 2015
It looks like mixing parity levels (e.g. mirrored for the SSD tier and parity for the HDD tier) can only be used if you enable S2D, and to enable S2D you have to have three nodes in a cluster. There may be hacks to get around it, but out of the box this configuration is not supported. You also have to have the Datacenter version of Windows, which makes the whole thing cost-prohibitive unless you have access to free server licenses.

I have no idea why they've limited it like this. It's not engineering; the software can clearly do it. With the explosion of cheap SSDs this could be a great storage option for SOHO or SMB users looking for something more than FreeNAS but who don't need HA.

Back to FreeNAS and iSCSI for now.
That just answered my question. Damn, Microsoft.
 

ColPanic

Member
Feb 14, 2016
ATX
Since the ability to do mirrored+parity tiers is clearly present in the software, and I could have sworn it was there in an earlier build before GA (I even remember what the dialog looked like), I bet some workarounds will come along to unlock this feature.

It may be as simple as creating the virtual disk using PowerShell instead of the wizard, but I haven't tried and don't want to destroy the working volumes I have. I may set up a VM with a bunch of VHDs to test.
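A disposable test rig along those lines might look like this (hypothetical VM name and paths; assumes the Hyper-V PowerShell module is available):

    # Create a handful of dynamic VHDX files and attach them to a test VM so
    # tiering experiments never touch the working volumes.
    1..6 | ForEach-Object {
        $vhd = "D:\SpacesTest\disk$_.vhdx"
        New-VHD -Path $vhd -SizeBytes 100GB -Dynamic | Out-Null
        Add-VMHardDiskDrive -VMName "SpacesTest" -Path $vhd
    }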

I have it running mirrored on both the HDD and SSD tiers and the performance is comparable to FreeNAS, but with some added features (easier expansion, rebalancing, management via Windows Server Manager). I'm still using FreeNAS too, but eventually I want to get away from it.

My next storage project is testing big all-SSD arrays. I'm collecting cheap 960GB SSDs and plan to try various configurations with them.
 

Chuntzu

Active Member
Jun 30, 2013
It looks like mixing parity levels (e.g. mirrored for the SSD tier and parity for the HDD tier) can only be used if you enable S2D, and to enable S2D you have to have three nodes in a cluster. There may be hacks to get around it, but out of the box this configuration is not supported. You also have to have the Datacenter version of Windows, which makes the whole thing cost-prohibitive unless you have access to free server licenses.

I have no idea why they've limited it like this. It's not engineering; the software can clearly do it. With the explosion of cheap SSDs this could be a great storage option for SOHO or SMB users looking for something more than FreeNAS but who don't need HA.

Back to FreeNAS and iSCSI for now.
This is not true; I have had mixed parity since TP4 and it works great.

 

Chuntzu

Active Member
Jun 30, 2013
To clarify, you need to use PowerShell, but Claus Jorgensen's blog and the TechNet articles give the commands.
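Along the lines of those blog and TechNet commands, the mixed-resiliency setup would be something like the following. This is only a sketch assuming the Server 2016 storage cmdlets; tier names and sizes are placeholders:

    # Each tier carries its own resiliency setting; New-Volume then binds the
    # mirrored SSD tier and the parity HDD tier into one tiered ReFS volume.
    New-StorageTier -StoragePoolFriendlyName "TierPool" -FriendlyName "MirrorOnSSD" `
        -MediaType SSD -ResiliencySettingName Mirror
    New-StorageTier -StoragePoolFriendlyName "TierPool" -FriendlyName "ParityOnHDD" `
        -MediaType HDD -ResiliencySettingName Parity

    New-Volume -StoragePoolFriendlyName "TierPool" -FriendlyName "Data" `
        -FileSystem ReFS -DriveLetter E `
        -StorageTierFriendlyNames "MirrorOnSSD", "ParityOnHDD" `
        -StorageTierSizes 200GB, 3TB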

 

Chuntzu

Active Member
Jun 30, 2013
When I get a chance I will document a walkthrough.

 

cesmith9999

Well-Known Member
Mar 26, 2013
What you can do with a single host now is have a tiered setup with different column counts in each tier.

You can have 2 SSDs mirrored with a column count of 1 and 8 HDDs mirrored with a column count of 4. MRV, unfortunately, is only available in S2D.

Chris
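Assuming New-StorageTier accepts a per-tier column count (an assumption here, not verified against the shipping cmdlets), Chris's example would translate to roughly:

    # Two mirrored tiers with different column counts: the 2-SSD tier gets one
    # column, the 8-HDD tier gets four (per the example above).
    New-StorageTier -StoragePoolFriendlyName "TierPool" -FriendlyName "SSDMirror" `
        -MediaType SSD -ResiliencySettingName Mirror -NumberOfColumns 1
    New-StorageTier -StoragePoolFriendlyName "TierPool" -FriendlyName "HDDMirror" `
        -MediaType HDD -ResiliencySettingName Mirror -NumberOfColumns 4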
 

Jab.R

New Member
Apr 16, 2016
Cincinnati OH
Or use Ubuntu, which as of 16.04 has built-in ZFS. I migrated drives from OpenIndiana a few years ago; very stable.


 

ColPanic

Member
Feb 14, 2016
ATX
Ubuntu would have the same limitations as FreeNAS. It doesn't do anything to take advantage of mixing SSDs and HDDs, and if you use RAIDZ you get the random I/O performance of a single disk, which is not ideal for VM storage. ZFS is great for some things, but the fact that it's based on open-source code from the days before SSDs were cheap is starting to show.
 

J--

Active Member
Aug 13, 2016
ZFS does caching with SLOG drives, but they don't do much to take advantage of mixing SSDs and HDDs. You can also mix parity levels and tiers, e.g. a mirrored stripe for the SSD tier and parity for the HDD tier.

Am I missing something? ZFS has great benefits from caching in both ARC and L2ARC. You can use L2ARC with NVMe, SSD, or whatever floats your boat. I'm not sure why you say this is unique to ReFS. Perhaps it doesn't have the third "cold" storage tier, but I'm thinking you could solve that pretty quickly with a symbolic link?

I agree with you about the balancing (I tried very hard to implement Btrfs to get balancing functionality, yet it couldn't match ZFS performance), but I don't foresee myself going above an 8-disk array for a homelab (for power consumption and disk vibration reasons). If a drive does fail, I'll just replace it with a larger mirrored pair of drives and grow the array slowly.

I also agree that the FreeNAS guys (mainly a few moderators, but they end up driving the "personality" of the forum) have big hubris problems and really shouldn't be the face of the platform.


Anyway, the biggest hurdle for me is the license cost for Windows Server in a homelab. I draw the line when the base software costs more than the hardware.
 

ColPanic

Member
Feb 14, 2016
ATX
I like ZFS, and after testing WS 2016 I stuck with ZFS, but there are areas where FreeNAS has fallen behind when it comes to utilizing SSDs.

It does a type of caching with SLOG, and you can add L2ARC as a supplement to RAM. I use both of these. But the L2ARC is only utilized when data is flushed from RAM, so it's not really a way to add SSD capacity, and SLOG drives just need to be small, low-latency drives; it doesn't do any good to use big ones. This all goes back to the fact that the underlying ZFS code was written for HDD arrays and, IMHO, it is still the best implementation of software RAID for HDDs. It just does nothing to utilize big, cheap SSDs.

What I want to do is use a bunch of low-cost SSDs (I've been picking up 960 GB drives whenever I find them below $200) alongside HDDs. You can of course create separate zpools for SSDs and HDDs, but that's not ideal; FreeNAS isn't set up to mix media types.

The other places it's fallen behind are load rebalancing, online RAID-level migration, and easily adding disks to existing zvols. Deduplication is also basically unusable. I suspect these features and more are present in the current, Oracle-owned implementation of ZFS, but we are at the mercy of a few developers working with 10-year-old tech. I hoped that MS, with their army of developers and four years of work on ReFS, would be the better choice, and on paper it does those things, but in my testing the real-world performance is not there yet.

I should also note that I'm only looking at iSCSI storage for VMware.
 

Bert

Well-Known Member
Mar 31, 2018
I am also using Storage Spaces and found the rebalancing and disk eviction very useful. I also found that if a disk is lost in striped mode, you can still access the filesystem and recover the data on the healthy drives.
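For anyone following along, the eviction and rebalance workflow is plain Storage-module cmdlets; a sketch with placeholder disk, virtual-disk, and pool names:

    # Retire a suspect disk, rebuild the virtual disk onto the remaining
    # drives, then rebalance the pool (Optimize-StoragePool is new in 2016).
    Set-PhysicalDisk -FriendlyName "PhysicalDisk3" -Usage Retired
    Repair-VirtualDisk -FriendlyName "VMStore"
    Optimize-StoragePool -FriendlyName "TierPool"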

@ColPanic, what is your usage scenario for this file server? Is it a home media server or some other purpose? I am curious why you want to do tiering and take advantage of SSDs in the first place.
 

nk215

Active Member
Oct 6, 2015
I've been moving away from *nix-based NAS to Windows Server 2016 for my file server. When I have more than 50,000 small files (0.1 to 2 MB each) in a single directory, *nix-based NAS units just slow way down.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I've been moving away from *nix-based NAS to Windows Server 2016 for my file server. When I have more than 50,000 small files (0.1 to 2 MB each) in a single directory, *nix-based NAS units just slow way down.
"NAS units slow way down" -- Going to need some clarification on what exactly is going on here.

- Is it slow to show all files in that directory when browsing over the network?
- Does the entire OS slow down, or only the management GUI?

I've never had that problem, and I used to run *nix-based image hosting services where I would run out of the ability to create new files within a directory (I believe due to an inode limit being reached on the filesystem).

I've also run into Windows timing out when trying to list directories with 5,000-10,000 files, even on SSD and NVMe. I've also seen this when browsing network shares from within Windows, no matter what the file-host OS was.
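One way to narrow that down is to time the bare enumeration on the local filesystem and again over the share; a rough sketch with hypothetical paths:

    # Generate ~50,000 small files, then time a directory listing locally and
    # over SMB to see which side of the wire is actually slow.
    $dir = "D:\manyfiles"
    New-Item -ItemType Directory -Path $dir -Force | Out-Null
    1..50000 | ForEach-Object { Set-Content -Path "$dir\file$_.dat" -Value ("x" * 1024) }

    Measure-Command { Get-ChildItem -Path $dir | Out-Null }
    Measure-Command { Get-ChildItem -Path "\\nas\share\manyfiles" | Out-Null }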
 