Server 2016 vs FreeNAS ZFS for iSCSI Storage

Discussion in 'Windows Server, Hyper-V Virtualization' started by ColPanic, Sep 15, 2016.

  1. ColPanic

    ColPanic Member

    I've been doing some testing with Windows Server 2016 (RTM) and FreeNAS and thought I would share some of my results. My use is purely a home/lab scenario and I'm not an expert on any of this, and it's quite possible that I did everything wrong, so take it for what it's worth (not much).

    I've been using FreeNAS for storage on my home/lab server but have never really liked it. While there are undeniable benefits to the underlying ZFS file system and volume manager, the FreeNAS implementation has always seemed like a bit of a hack, relying on the old open-source code from before Oracle closed it. There's also a cult-like mentality among many of its users, and their forums are... well... cult-like; they thumb their noses at all this newfangled virtualization stuff (and tell you to buy more RAM). So I decided to give Windows Server 2016 a try and see how the two stacked up. Windows' newer file system, ReFS, was introduced with Server 2012 but has had some time to mature. It fills in many of the features that ZFS has had but NTFS lacked, such as checksumming and scrubbing, and was designed for storing very large files such as virtual machines or datastores. Microsoft has also added a few key features that ZFS does not have and that are, at least on paper, very appealing.
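    As an aside, ReFS's per-file integrity streams (its checksumming) can be poked at directly from PowerShell. A minimal sketch only, using the Storage module cmdlets; the disk number, drive letter and paths are placeholders, not my actual layout:

        # Format a data volume with ReFS (disk 2 and drive D: are placeholders)
        New-Volume -DiskNumber 2 -FriendlyName "Tank" -FileSystem ReFS -DriveLetter D

        # Inspect and enable integrity (checksumming) for a folder or file
        Get-FileIntegrity -FileName 'D:\VMs\test.vhdx'
        Set-FileIntegrity -FileName 'D:\VMs' -Enable $true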

    The biggest change from 2012 R2 is the introduction of Storage Spaces Direct, which is basically Microsoft's version of VMware's vSAN and a move toward hyper-converged servers and software-defined storage. That's not really applicable to a single-host home or lab setup, but some of the other upgrades are. One of the best improvements is tiering. You can now have three tiers of storage: NVMe for caching + SSD for "hot" data + HDD for "cold" data. ZFS does caching with SLOG drives, but it doesn't do much to take advantage of mixing SSDs and HDDs. You can also mix parity levels across tiers, e.g. a mirrored stripe for the SSD tier and parity for the HDD tier, which helps tremendously with write speeds. Another advantage over FreeNAS is the ability to easily add capacity. You can add disks to a ZFS pool, but, crucially, ZFS does nothing to re-balance the storage. For example, if you have 8 drives running as mirrored pairs that are starting to get full and you add another pair, nearly all of the new writes will go to the new drives because the other 8 are mostly full, robbing you of the performance advantage you should get from RAID 10. Server 2016 will re-balance existing data across all drives. You can also empty out a drive prior to removing it, which could be useful.
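    For reference, adding capacity, re-balancing, and emptying a disk can all be done with the Storage Spaces cmdlets. A rough sketch only; the pool and disk names are placeholders:

        # Add any newly attached, poolable disks to an existing pool
        $new = Get-PhysicalDisk -CanPool $true
        Add-PhysicalDisk -StoragePoolFriendlyName "Pool01" -PhysicalDisks $new

        # Re-balance existing data across all disks in the pool (new in Server 2016)
        Optimize-StoragePool -FriendlyName "Pool01"

        # Empty a disk before pulling it: retire it, let the virtual disks repair, then remove it
        Set-PhysicalDisk -FriendlyName "PhysicalDisk5" -Usage Retired
        Get-VirtualDisk | Repair-VirtualDisk
        Remove-PhysicalDisk -StoragePoolFriendlyName "Pool01" -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk5")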

    The Server:
    ESXi 6.0 u2
    Intel CP2600 with Dual Xeon E5-2670
    96GB RAM (20GB to FreeNAS)

    The tests were all done from a Server 2016 VM using iSCSI. The host is running ESXi 6.0 with FreeNAS 9.10 and Windows Server 2016 VMs. FreeNAS has an LSI 2008 HBA in IT mode passed through, with 8x WD Re 3TB drives. It also has a Nytro F40 SSD as a SLOG. Windows has 2x LSI 2008 HBAs passed through: one has 8x Hitachi 7K3000 2TB drives, the other has 2x 960GB Toshiba HK3E2 SSDs. The HDD tier is parity, the SSD tier is mirrored. In both cases I created an 8TB iSCSI volume and shared it back to ESXi. There is also a 9260-8i running RAID 5 with 4x 400GB Hitachi SSDs. (Side note: I have to stay away from the great deals forum.)

    EDIT: In order to mix parity levels across tiers, you have to enable Storage Spaces Direct, which requires three hosts. So parity HDD + mirrored SSD is not an option on a single-host system.
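    For anyone reproducing the "share it back to ESXi" step on the Windows side, it's roughly the following with the iSCSI Target Server role. A sketch only; the target name, path, size and initiator IQN are placeholders, not my exact values:

        # Install the iSCSI Target Server role
        Install-WindowsFeature FS-iSCSITarget-Server

        # Create a target restricted to the ESXi initiator, back it with a VHDX, and map the two
        New-IscsiServerTarget -TargetName "esxi-datastore" -InitiatorIds "IQN:iqn.1998-01.com.vmware:esxi-host"
        New-IscsiVirtualDisk -Path "D:\iSCSI\datastore1.vhdx" -SizeBytes 8TB
        Add-IscsiVirtualDiskTargetMapping -TargetName "esxi-datastore" -Path "D:\iSCSI\datastore1.vhdx"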

    For the file copy tests, I created a RAM disk to make sure that wouldn't be a bottleneck.
    Benchmark screenshots:
    [screenshot: RAM disk]
    [screenshot: 1500 MB to ZFS over iSCSI]
    [screenshot: 5000 MB to ZFS over iSCSI]
    [screenshot: File copy to ZFS from RAM disk]
    [screenshot: File copy from ZFS to RAM disk]
    [screenshot: 5000 MB to ReFS over iSCSI]
    [screenshot: File copy to ReFS from RAM disk]
    [screenshot: File copy from ReFS to RAM disk]
    [screenshot: For comparison, RAID 5 with 4x 400GB Hitachi SSDs]
    [screenshot: Parity only, no tiering]

    I'm not going to draw any conclusions other than to say that the results are decidedly mixed. I'm not quite ready to go all-in with Windows, but I'm not going to write it off either. I'm going to do more testing with tiers to see if Microsoft's parity RAID is any better than it was in 2012 R2 without the benefit of SSD caching, and I'm also going to see if the third NVMe tier does much in this use case.

    Let me know if there are any other tests you'd like to see or questions about the setup.
     
    #1
    Last edited: Oct 18, 2016
  2. Patrick

    Patrick Administrator
    Staff Member

    #2
  3. manxam

    manxam Active Member

    Thanks for that @ColPanic, I'm really looking forward to your testing with parity and no cache, as it was virtually useless in 2012 R2.
     
    #3
  4. RobertFontaine

    RobertFontaine Active Member

    Silly question, but is there still active development in the OpenZFS space? I remember the big hurrah when it kicked off a couple of years ago, but I don't see much activity on GitHub when I look at the logs.
     
    #4
  5. manxam

    manxam Active Member

    There's still a lot of development by the Illumos team (illumos-gate) and they upstream regularly:

    Aug 16 to Sept 16:
    Excluding merges, 28 authors have pushed 54 commits to master and 54 commits to all branches. On master, 483 files have changed and there have been 7,488 additions and 94,658 deletions.
     
    #5
  6. Larson

    Larson Member

    Thanks @ColPanic! Very helpful, since I've been going back and forth between these two platforms myself for my own home setup.
     
    #6
  7. gea

    gea Well-Known Member

    OpenZFS and the storage ecosystem around it are very active despite the break with Oracle;
    see Companies - OpenZFS

    Also not to be forgotten is the push of development into OpenZFS from ZoL, where ZFS, not btrfs, seems to be becoming the de facto next-gen filesystem, or Samsung, which recently bought Joyent, one of the key developers behind Illumos, the free Solaris fork.
     
    #7
    Last edited: Sep 19, 2016
  8. BackupProphet

    BackupProphet Well-Known Member

    #8
  9. gigatexal

    gigatexal I'm here to learn

    Love these kinds of write-ups. Curious, was LZ4 enabled on the ZFS pool/volume?
     
    #9
  10. ColPanic

    ColPanic Member

    Yes, LZ4 was enabled.
     
    #10
  11. ColPanic

    ColPanic Member

    Correct me if I'm wrong, but there hasn't been much development in the places where the FreeNAS implementation is lacking: specifically dedupe, encryption, expanding and rebalancing storage, and taking advantage of cheap SSD drives.
     
    #11
  12. wildchild

    wildchild Active Member

    Dedupe, encryption, and expansion are standard parts of ZFS, though development has slowed down since Tegile was bought by Oracle.
    Please don't make the mistake of thinking ZFS is *BSD only.
    There is also a lot of development on the Illumos (formerly known as OpenSolaris) side of things.
    Cheap SSDs are supported, although not advised for the ZIL.
    TRIM is available by default on FreeBSD.
     
    #12
  13. ridney

    ridney Member

    Thanks ColPanic. Looking forward to your NVMe + SSD + Parity HDDs tests.
     
    #13
  14. manxam

    manxam Active Member

    @ColPanic, do you happen to have any idea when the second part of your testing will take place? I'm extremely curious about parity without tiering.
     
    #14
  15. Marshall Simmons

    @ColPanic, would you be able to explain how you set up mixed parity levels in Storage Spaces?
     
    #15
  16. ColPanic

    ColPanic Member

    I'll try and get to it in the next week or so.
     
    #16
  17. ColPanic

    ColPanic Member

    You just add all the disks (SSD and HDD) to a new pool, then when you create the virtual disk you set the size and parity level for each tier. It's pretty obvious in the wizard.

    You can also do it all with PowerShell if that's your thing.
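    Roughly what the wizard is doing, expressed in PowerShell. A sketch only; the pool/tier names and tier sizes are placeholders:

        # Pool all eligible SSDs and HDDs together
        $disks = Get-PhysicalDisk -CanPool $true
        New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

        # One tier per media type
        $ssd = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
        $hdd = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

        # Create the tiered virtual disk; per-tier sizes go in -StorageTierSizes.
        # Note: on a standalone host the resiliency setting applies to the virtual disk as a whole,
        # not per tier -- per-tier resiliency is the S2D path discussed below.
        New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Tiered01" `
            -StorageTiers $ssd,$hdd -StorageTierSizes 900GB,7TB -ResiliencySettingName Mirror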
     
    #17
  18. dswartz

    dswartz Active Member

    Interesting article. One nit, though: SLOG has nothing to do with caching - it is a transaction log.
     
    #18
  19. ColPanic

    ColPanic Member

    It looks like mixing parity levels (e.g. mirrored for the SSD tier and parity for the HDD tier) can only be done if you enable S2D, and to enable S2D you have to have 3 nodes in a cluster. There may be hacks to get around it, but out of the box this configuration is not supported. You also have to have the Datacenter edition of Windows, which makes the whole thing cost-prohibitive unless you have access to free server licenses.
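    For reference, the S2D flavour of this (per-tier resiliency) looks roughly like the following. A sketch only: it assumes an existing cluster that meets the S2D requirements, and the pool/tier/volume names and sizes are placeholders:

        # On a qualifying cluster
        Enable-ClusterStorageSpacesDirect

        # With S2D, each tier can carry its own resiliency setting
        New-StorageTier -StoragePoolFriendlyName "S2D*" -FriendlyName "MirrorSSD" -MediaType SSD -ResiliencySettingName Mirror
        New-StorageTier -StoragePoolFriendlyName "S2D*" -FriendlyName "ParityHDD" -MediaType HDD -ResiliencySettingName Parity

        # Cut a volume that spans both tiers
        New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Mixed01" -FileSystem CSVFS_ReFS `
            -StorageTierFriendlyNames MirrorSSD,ParityHDD -StorageTierSizes 800GB,6TB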

    I have no idea why they've limited it like this. It's not an engineering limitation - the software can clearly do it. With the explosion of cheap SSDs this could be a great storage option for SOHO or SMB users who want something more than FreeNAS but don't need HA.

    Back to FreeNAS and iSCSI for now.
     
    #19
  20. manxam

    manxam Active Member

    Thanks for the update, ColPanic. I'm a little confused, though: in your original post you described a parity HDD tier with a mirrored SSD tier, yet above it appears you're saying that this cannot be done? Did you have a chance to measure a PARITY-only HDD pool of any size for a speed comparison to a ZFS pool w/o L2ARC or SLOG?

    Thanks!
     
    #20