Storage Spaces advice (or move to ZFS) for a small office build


legen

Active Member
Mar 6, 2013
213
39
28
Sweden
Hello,

We are migrating our production environment to Microsoft Azure. We have decided to buy one new file server for the local office, for fast and safe storage of business documents, code builds and other files. We only need 10 TB+ of space. Here I thought Storage Spaces and ReFS would be the perfect fit.

But after some testing I realize it's not possible to use Storage Spaces tiering with ReFS! And Storage Spaces Direct requires a minimum of two nodes, so I cannot use that either. There goes my idea with ReFS: some fast SSDs pooled with some HDDs to get safety, capacity and speed...

I'm really confused as to why it's still not possible to build a simple single-node SAN that is both fast and safe using Storage Spaces + ReFS, when it's simple using ZFS.

A Windows server would be much easier to integrate and maintain in the current Windows environment than a ZFS-based server (e.g. FreeNAS).

My thoughts/questions for the wise STH people :)
  • Should I consider staying on NTFS and tiering some HDDs + SSDs for the file storage (see the sketch after this list)? It feels like a waste to store lots of archive/cold data on an all-SSD pool.
  • Is it safe to use consumer SSDs in Storage Spaces (I know in ZFS I need power caps for my ZIL)?
  • Do Storage Spaces still have performance problems with parity layouts like RAID 5?
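
For reference, the NTFS tiered setup I have in mind would look roughly like this in PowerShell (pool name, tier sizes and resiliency are placeholders, not a tested config):

Code:
# List the disks that are eligible for pooling
Get-PhysicalDisk -CanPool $true | Format-Table FriendlyName, MediaType, Size

# Create a pool from all poolable disks
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "OfficePool" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Define one tier per media type
$ssdTier = New-StorageTier -StoragePoolFriendlyName "OfficePool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "OfficePool" `
    -FriendlyName "HDDTier" -MediaType HDD

# Create a tiered, mirrored virtual disk (sizes are illustrative)
New-VirtualDisk -StoragePoolFriendlyName "OfficePool" -FriendlyName "Data" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 400GB, 10TB `
    -ResiliencySettingName Mirror

# Initialize, partition and format the new disk as NTFS
Get-VirtualDisk -FriendlyName "Data" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS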

Thanks!
 

i386

Well-Known Member
Mar 18, 2016
4,221
1,540
113
34
Germany
But after some testing I realize it's not possible to use Storage Spaces tiering with ReFS!
:eek:
Unless I missed something, tiering was only possible with ReFS, not with NTFS. Also, tiering on Storage Spaces is more like moving hot data to the faster tier at night/every 24 hours, not like ZFS where sync writes are cached by the ZIL.
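
A quick way to see that schedule-driven behavior (assuming a tiered NTFS volume mounted as F:; task path as on Server 2012 R2/2016):

Code:
# The rebalancing job is a built-in scheduled task
Get-ScheduledTask -TaskPath "\Microsoft\Windows\Storage Tiers Management\" |
    Format-Table TaskName, State

# It can also be triggered on demand against a tiered NTFS volume
Optimize-Volume -DriveLetter F -TierOptimize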

Is it safe to use consumer SSDs in storage spaces (i know in ZFS i need power caps for my ZIL)?
No, see this post on the Microsoft blog: Don’t do it: consumer-grade solid-state drives (SSD) in Storage Spaces Direct

Do storage spaces still have performance problems with parity layouts like raid-5?
Unless there was an update between November 2016 and now that I missed and that magically improved performance, parity spaces are still slower than the alternatives (even Windows software RAID 5!)
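
If you want numbers, you can measure the gap yourself with DiskSpd against a parity space and a mirror space (parameters are illustrative, not a tuned benchmark; F: is the parity volume and M: the mirror here):

Code:
# Random 64K writes for 60 s, 4 threads, queue depth 8, caching disabled (-Sh)
.\diskspd.exe -c10G -d60 -b64K -r -o8 -t4 -w100 -Sh F:\parity-test.dat
.\diskspd.exe -c10G -d60 -b64K -r -o8 -t4 -w100 -Sh M:\mirror-test.dat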
 

legen

Active Member
Mar 6, 2013
213
39
28
Sweden
:eek:
Unless I missed something, tiering was only possible with ReFS, not with NTFS. Also, tiering on Storage Spaces is more like moving hot data to the faster tier at night/every 24 hours, not like ZFS where sync writes are cached by the ZIL.


No, see this post on the Microsoft blog: Don’t do it: consumer-grade solid-state drives (SSD) in Storage Spaces Direct


Unless there was an update between November 2016 and now that I missed and that magically improved performance, parity spaces are still slower than the alternatives (even Windows software RAID 5!)
I'm looking at Storage Spaces, not Storage Spaces Direct :)
Here is a link describing how Storage Spaces + ReFS cannot do tiering (I tested it myself just now):
Set-FileStorageTier fails on Microsoft ReFS formatted volume.

Moving data around via a scheduled task is OK. I just want to avoid an all-flash array, since 95% of the data will most probably be cold and it feels like a waste.

I have not (yet) found any advice on using consumer SSDs in an all-SSD array (for Storage Spaces). For cache devices consumer drives are bad, but since I cannot use cache devices, maybe it's OK to use consumer SSDs in an all-flash pool?
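
If I do end up trying consumer SSDs, I would at least keep an eye on wear and errors with the built-in reliability counters (a minimal sketch; which fields are populated varies by drive):

Code:
# SMART-style health data for each physical disk
Get-PhysicalDisk | Get-StorageReliabilityCounter |
    Select-Object DeviceId, Wear, Temperature, ReadErrorsTotal, WriteErrorsTotal, PowerOnHours |
    Format-Table -AutoSize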
 

frogtech

Well-Known Member
Jan 4, 2016
1,482
272
83
35
Not sure what your storage devices consist of, but if you want to stick with Windows your options are: a RoC with caching, like an LSI card (or other OEM rebrand) with CacheCade; Intel RST as a cache with an onboard SATA 6 Gbps controller (which is probably more of a thing on consumer-grade hardware than enterprise stuff); or Storage Spaces configured with write-back cache (WBC). I'm thinking: why not just do a RAID 10 with an SSD cache? This will require either a RoC like I said, or an HBA with some kind of Unix file system (ZFS or other).

Honestly, I probably wouldn't want to manage CIFS permission issues/shares on a random ZFS box in a Windows-dominant environment, especially with Samba.
 

superfula

Member
Mar 8, 2016
88
14
8
I'm looking at Storage Spaces, not Storage Spaces Direct :)
Here is a link describing how Storage Spaces + ReFS cannot do tiering (I tested it myself just now):
Set-FileStorageTier fails on Microsoft ReFS formatted volume.
It will do tiering, but you can't 'pin' things to the cache tier, which is what your PowerShell command does and what the author is trying to do in the post you linked.

Edit: The same author's post on setting up tiers with ReFS:
Microsoft Storage Spaces 2016: Storage Tiering NVMe + SSD Mirror + HDD Parity
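
For comparison, this is what pinning looks like on an NTFS tiered volume (file path and tier name are placeholders); it is exactly this operation that errors out on ReFS:

Code:
# Pin a hot file to the SSD tier; the move happens on the next optimization run
$ssdTier = Get-StorageTier -FriendlyName "SSDTier"
Set-FileStorageTier -FilePath "F:\builds\current.vhdx" -DesiredStorageTier $ssdTier
Optimize-Volume -DriveLetter F -TierOptimize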
 

frogtech

Well-Known Member
Jan 4, 2016
1,482
272
83
35
I'm wondering why you would use the NVMe drive for the OS and not as the cache?

 

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
A Windows server would be much easier to integrate and maintain in the current Windows environment than a ZFS-based server (e.g. FreeNAS).
Some info:
ZFS on Oracle Solaris, or on free Solaris forks like OmniOS where ZFS originated and is native, together with the SMB server from Sun instead of the optional SAMBA, offers nearly perfect Windows integration (the best in the Linux/Unix world):

- AD support out of the box, zero config
- Windows SID support integrated into the Unix filesystem ZFS, zero config; this allows backup/restore with ACLs intact
- Windows NTFS-like ACLs with inheritance, zero config
- Windows-compatible SMB groups included (they can contain groups, in contrast to Linux groups)
- ZFS snaps as Windows previous versions, zero config

Plus:
- no weekly security updates with reboots
- much faster, with a higher security level and more features than ReFS

btw
For an SMB filer you do not need, use or want sync writes, so an additional Slog device for the ZIL is not required.
 

legen

Active Member
Mar 6, 2013
213
39
28
Sweden
Not sure what your storage devices consist of, but if you want to stick with Windows your options are: a RoC with caching, like an LSI card (or other OEM rebrand) with CacheCade; Intel RST as a cache with an onboard SATA 6 Gbps controller (which is probably more of a thing on consumer-grade hardware than enterprise stuff); or Storage Spaces configured with write-back cache (WBC). I'm thinking: why not just do a RAID 10 with an SSD cache? This will require either a RoC like I said, or an HBA with some kind of Unix file system (ZFS or other).

Honestly, I probably wouldn't want to manage CIFS permission issues/shares on a random ZFS box in a Windows-dominant environment, especially with Samba.
In a Windows domain environment, I'd try Storage Spaces with WBC, or go the RoC route with an LSI card.
I have been playing with the Storage Spaces WBC this morning. I can create a WBC using a storage pool where I mix HDDs and SSDs. Microsoft recommends using at most 16 GB for the WBC in a non-clustered environment (Using Storage Spaces for Storage Subsystem Performance - Windows 10 hardware dev).

Going this route I should get two small SSDs and mix them into a larger HDD pool. This might be the best possible solution currently, since tiering does not work (see below).
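
A rough sketch of what I have been testing (friendly names and sizes are placeholders; the SSDs are dedicated to journal/cache duty so they do not count toward capacity):

Code:
# Dedicate the two small SSDs to the write-back cache
Set-PhysicalDisk -FriendlyName "SSD1" -Usage Journal
Set-PhysicalDisk -FriendlyName "SSD2" -Usage Journal

# Create the data disk with an explicit 16 GB WBC, per the guidance above
New-VirtualDisk -StoragePoolFriendlyName "OfficePool" -FriendlyName "Data" `
    -ResiliencySettingName Mirror -UseMaximumSize -WriteCacheSize 16GB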

I'd just do ZFS with napp-it or FreeNAS, enable CIFS, and call it a day.
Some info:
ZFS on Oracle Solaris, or on free Solaris forks like OmniOS where ZFS originated and is native, together with the SMB server from Sun instead of the optional SAMBA, offers nearly perfect Windows integration (the best in the Linux/Unix world):

- AD support out of the box, zero config
- Windows SID support integrated into the Unix filesystem ZFS, zero config; this allows backup/restore with ACLs intact
- Windows NTFS-like ACLs with inheritance, zero config
- Windows-compatible SMB groups included (they can contain groups, in contrast to Linux groups)
- ZFS snaps as Windows previous versions, zero config

Plus:
- no weekly security updates with reboots
- much faster, with a higher security level and more features than ReFS

btw
For an SMB filer you do not need, use or want sync writes, so an additional Slog device for the ZIL is not required.
I'd love to go the ZFS route, but the AD I'm integrating with is Azure Active Directory Domain Services. This is not a 100% real AD, and I had some issues connecting FreeNAS to it (maybe OmniOS works better, but it still does not have as good AD integration as Windows).

Therefore I'm more or less limited to Windows, if I can find a good-enough working solution :)

Gea, good catch on the SMB filer; I did not think about the fact that it does not use sync writes.

I'm wondering why you would use the NVMe drive for the OS and not as the cache?


Was this posted in the correct thread?

It will do tiering, but you can't 'pin' things to the cache tier, which is what your PowerShell command does and what the author is trying to do in the post you linked.

Edit: The same author's post on setting up tiers with ReFS:
Microsoft Storage Spaces 2016: Storage Tiering NVMe + SSD Mirror + HDD Parity

I have done some more digging on this, and Windows Server 2016 actually cannot use ReFS with tiering (it might look like you can, but it's false).

This can be verified by creating a tiered pool with ReFS as the file system and trying to run:
Code:
PS F:\> Get-FileStorageTier -VolumeDriveLetter F | FT -AutoSize
Get-FileStorageTier : The specified volume does not support storage tiers.
or

Code:
PS F:\> Get-Volume -DriveLetter "F" | Optimize-Volume -TierOptimize
Optimize-Volume : The volume optimization operation requested is not supported by the hardware backing the volume.
Activity ID: {6eca351b-bbcb-4c5d-a883-8dd2ebcaf00e}
At line:1 char:31
+ Get-Volume -DriveLetter "F" | Optimize-Volume -TierOptimize
+                               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (StorageWMI:ROOT/Microsoft/...age/MSFT_Volume) [Optimize-Volume], CimException
    + FullyQualifiedErrorId : StorageWMI 43022,Optimize-Volume
More info on MSDN confirming this issue:
Windows Server 2016 Storage Spaces Tier ReFS

So sadly it's not possible. It feels like Microsoft put all their effort into Storage Spaces Direct and none into ordinary Storage Spaces.

I'm still trying to find any sources on Storage Spaces and consumer SSDs for simple SMB file storage. Does anyone have any experience?