Adding nvme cache drive to existing storage pool


Eric_on_Fire

New Member
Dec 18, 2019
I'm trying to add a P1600X Optane NVMe drive to an existing storage pool composed of ten 8 TB SAS drives. My intention is to use it as a cache for tiered storage, but I'm struggling with it. Is this even possible, or would I have to create the storage pool from scratch? This is a stand-alone Windows Server 2022 installation.

[Attachment: dodeca-pools.png]

Eric_on_Fire

New Member
Dec 18, 2019
For anyone else who tries this: I'm not positive, but I believe the reason I failed is that my 110 GB Optane drives are apparently too small to be pooled. I tried setting up a tiered storage pool from scratch and ran right into that barrier.
 

Eric_on_Fire

New Member
Dec 18, 2019
I believe that applies to Storage Spaces Direct, whereas I'm using regular (non-Direct) Storage Spaces on a stand-alone home server. I wasn't able to find documentation, but CannotPoolReason reports Insufficient Capacity. Could very easily be operator error!
[Attachment: 1718225437119.png]
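For anyone wanting to check the same thing, something like this will show whether a disk is eligible for pooling and, if not, why:

  Get-PhysicalDisk | Select-Object FriendlyName, Size, CanPool, CannotPoolReason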
 

brokenwindupdoll

New Member
Apr 20, 2019
I'm very, very late to the thread, but can at least provide some info.

Starting at the end: the likely reason you can't add the disk to the pool (the meaning behind "CannotPoolReason : Insufficient Capacity") is that there isn't enough non-allocated space on the disk (i.e., there are partitions on there carving it up). Wipe the partitions off it (Clear-Disk will do that), and it should pool up no problem.
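In PowerShell terms, something like this; the disk number and pool name are placeholders for illustration, and note that Clear-Disk destroys whatever is on that disk, so double-check the number:

  # Placeholder names: disk 5 is the Optane, the pool is called "MainPool".
  Clear-Disk -Number 5 -RemoveData -Confirm:$false
  Add-PhysicalDisk -StoragePoolFriendlyName "MainPool" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)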

And that leads to the next bit. I know you already found a different method, but for anyone coming later, here's what I've managed to learn about cache in Storage Spaces:

First up, almost all the info you'll find related to "cache disk" and Storage Spaces is only about Storage Spaces Direct (shortened to S2D). Cache disks there are more about data locality: since the data is spread across all the different hosts in the cluster, what you specifically ask for could be stored entirely on a different machine and take a network-hop penalty to get to you, so putting really fast NVMe flash locally helps alleviate that random-access latency. The Storage Bus Cache article gregsachs linked is actually using S2D's cache mode on a single host; it still requires a single-node Windows Failover Cluster, which is as finicky as that sounds. Assuming you aren't trying to set up a business-scale VSAN, these aren't the droids you're looking for.

So, what you're likely trying to do is make your spinning-disk parity volume faster. There are two ways to do that:
  1. A so-called mirror-accelerated parity volume. This is true tiering, where data lives either in the fast tier or the slow tier. It's the best option if you're trying to make both reads and writes fast for humanly random file access, as it'll move hot data into the fast tier based on use. Instructions on making one are over here, and there's a rough sketch right after this list. I think you can convert an existing volume into a tiered one (but I've never tried, so don't do it to anything you care about).
  2. The intrinsic WriteCacheSize and ReadCacheSize of the parity volume itself. There's very little direct info about this, and it starts digging into the bowels of how Storage Spaces does its parity writing. The quick version: if the stripe size of the parity volume and the file system allocation unit size don't line up, Storage Spaces does a two-step process when writing data. It first lands the data in a so-called cache on the pool as mirrored data; then, either after the cache is exhausted or the write is done, it spends the effort to do the parity calculation and stripe it across all the disks. There's a very nice writeup of this. The performance fix that writeup uses does work, but when I ran my large data disk like that, whenever Windows did its built-in volume optimizations it would exhaust all of my RAM and grind everything to a snail's pace (which is the exact opposite of making a NAS zippy!). It's been a while, and that might have been a since-fixed bug, but if you have some spare SSDs you can put them to use instead. Those CacheSize properties are settable when creating the volume (but not after!), so you can actually crank them up. Add some faster SSDs to the pool, and it will use them for cache instead of the parity disks! I use this on my backup volume to make writes much faster; there's a second sketch below showing roughly what that looks like.
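Here's the rough sketch of option 1 I mentioned. The pool, tier, and volume names and the sizes are made up for illustration, and the example uses ReFS, which is what mirror-accelerated parity is normally paired with:

  # Placeholder pool/tier/volume names and sizes; adjust for your own pool.
  New-StorageTier -StoragePoolFriendlyName "MainPool" -FriendlyName "Performance" -MediaType SSD -ResiliencySettingName Mirror
  New-StorageTier -StoragePoolFriendlyName "MainPool" -FriendlyName "Capacity" -MediaType HDD -ResiliencySettingName Parity
  New-Volume -StoragePoolFriendlyName "MainPool" -FriendlyName "Tiered" -FileSystem ReFS `
      -StorageTierFriendlyNames "Performance","Capacity" -StorageTierSizes 100GB,20TB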
I think in both cases you'll have to recreate the volume to get what you want. If you're using thin-provisioned volumes, this is just the annoyance of running a robocopy move between the old and new volume; if you're using fixed provisioning, you're SOL and need some other blob of storage to temporarily copy to. (Just use thin provisioning. It's way easier to live with, and whatever performance penalty it might have from claiming and releasing slabs is minor and swamped by all the usual disk-drive performance hangups anyway.)
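And here's the sketch of option 2: since you're recreating the volume anyway, make a new thin parity virtual disk with the write cache cranked up. Again, the names and sizes are placeholders, and how big a cache you can ask for depends on how many SSDs are in the pool:

  # Placeholder names and sizes. With SSDs in the pool, the write-back cache lands on them.
  New-VirtualDisk -StoragePoolFriendlyName "MainPool" -FriendlyName "ParityDisk" `
      -ResiliencySettingName Parity -ProvisioningType Thin -Size 60TB -WriteCacheSize 8GB
  # Then initialize, partition, and format it as usual:
  Get-VirtualDisk -FriendlyName "ParityDisk" | Get-Disk | Initialize-Disk -PassThru |
      New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS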