Guide on SS (not S2D) PS commands for mirror accelerated parity?


jcl333

Active Member
May 28, 2011
253
74
28
Hello all,

I find pieces of this information all over the place, but nothing really comprehensive that covers all the little gotchas. I suppose if I can't find one, I will make one.

I have 2x NVMe eSSDs w/PLP and 7x 10TB shucked SATA drives, and there appear to be a lot of different commands needed to get this exactly right. I am using Server 2019 Datacenter.

Thanks

-JCL
 

MBerthe

New Member
Feb 28, 2019
5
5
3
France
Hi,
First, you need to define what you want to do with Storage Spaces.
What is your goal? Capacity, performance, etc.
If you haven't set up your system yet, can you give us the output of this command (PowerShell):
Get-PhysicalDisk | sort-object PhysicalLocation | select SlotNumber, FriendlyName, Manufacturer, Model, mediatype, FirmwareVersion, BusType, PhysicalSectorSize, LogicalSectorSize, PhysicalLocation | ft

For Storage Spaces (not Direct), the MS documentation is not updated for 2019... but it is the same for 2012R2, 2016 and 2019:
Storage Spaces overview
Have a nice day!
 

jcl333

Active Member
May 28, 2011
253
74
28
OK, let me share with you guys the same thing I put up on the Azure Stack HCI Slack group that I just joined:

Here is my current hardware list that this references:
list of hardware 3-26-2020.pdf

My original idea was a primary server and a secondary server to back up the first, to set up the “3-2-1 rule” for backing up data, with the third copy probably being in the cloud (Backblaze or something) or maybe at my parents' house. However, as I gathered hardware from Amazon, eBay, and the junk pile at work, I realized that I could possibly do more with it. I work as a server system engineer running a datacenter, mostly VMware, but I have an extensive background in Windows, Hyper-V, and networking as well. I have been doing a lot of research into the ReFS and ZFS file systems for high-reliability data storage, so I started researching/playing with Storage Spaces.

I started getting frustrated because much of the documentation talks about S2D, and it is becoming hard to figure out which system can do what I want. I do not necessarily need an S2D cluster in my home, but I am interested in HCI, replication, de-duplication/compression, and related technologies. It also looks like I might be able to use Storage Replica to copy the data to the second server (or a third?). I am wondering if it can do so without re-hydrating, which would be nice, and I could also potentially play with RDMA/iWARP.

I am willing to set up S2D if that is what I have to do and I cannot get what I need from regular SS, but I do not want to use too much electricity, and at some point my wife will get irritated with what all this costs. As for data, I have around 20TB of family photos, videos, BD/DVD ISOs, and such. I am going to get more into smart home, Plex, and all that sort of stuff over time, so it will be nice to have a small infrastructure of servers to support it. So this is not technically a homelab per se, but I could certainly use a portion of it for that. This is where I currently stand:

- Researching and gathering all the needed hardware (see attached list above)
- Researching the commands and process for creating a mirror accelerated parity array (see the rough command sketch after this list)
- Researching the associated hardware needed to support this configuration properly, as if it were production
- Use the 7x 10TB drives for dual-parity and some combination of the SSDs for cache
- Looks like there are a lot of rules for the number and size of cache drives vs. data drives
- This is especially where I run into SS vs. S2D issues
- I am starting to make a list and compile all the requirements
- Finding all the little gotchas that I could encounter and trying to solve them
- Best way to replicate, copy, or backup the data from primary server to the secondary
- Currently looking into what it would take to actually setup proper S2D
- As you can see from my hardware list, I got lucky and ended up with a fair bit of some good stuff
- Researching the network cards and switches needed for RDMA/RoCE/iWARP, have not bought anything yet
- See if it would be possible to just do two nodes with x-over cables and not need a switch
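Roughly, the command sequence I am piecing together looks something like the sketch below. The pool/tier/volume names, tier sizes, and column counts are just placeholders and assumptions on my part (not tested on my hardware yet); the idea is an SSD mirror tier over an HDD dual-parity tier:

Code:
# Pool everything that is available for pooling (assumes a single storage subsystem)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks

# Performance tier: 2-way mirror on the NVMe SSDs
New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSD_Mirror" -MediaType SSD -ResiliencySettingName Mirror

# Capacity tier: dual parity across the 7x 10TB HDDs (dual parity needs at least 7 disks)
New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDD_Parity" -MediaType HDD -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -NumberOfColumns 7

# Mirror-accelerated parity volume on ReFS (tier sizes are placeholders)
New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "Data" -FileSystem ReFS -StorageTierFriendlyNames "SSD_Mirror","HDD_Parity" -StorageTierSizes 500GB,40TB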

I am actually going to go ahead and buy a 2nd X10SRH-CF motherboard since I already have a CPU and RAM for it; I am still debating whether I will buy a 2nd 16-slot Supermicro chassis. Pretty sure these support the SES-2 enclosure management / awareness that S2D likes to see.

-JCL
 

jcl333

Active Member
May 28, 2011
253
74
28
Hi,
First, you need to define what you want to do with Storage Spaces.
What is your goal? Capacity, performance, etc.
If you haven't set up your system yet, can you give us the output of this command (PowerShell):
Get-PhysicalDisk | sort-object PhysicalLocation | select SlotNumber, FriendlyName, Manufacturer, Model, mediatype, FirmwareVersion, BusType, PhysicalSectorSize, LogicalSectorSize, PhysicalLocation | ft

For Storage Spaces (not Direct), the MS documentation is not updated for 2019... but it is the same for 2012R2, 2016 and 2019:
Storage Spaces overview
Have a nice day!
Thank you for the reply.

Good points, take a look at my more detailed post and see what you think.

I could run that command on the machine I am currently testing with (which is server 2 in my post) and then later on server 1. But I also think a lot of the details you are looking for are in the list attached to the post.

I am glad someone else pointed out that the docs are not updated for 2019 ;-)

-JCL
 

jcl333

Active Member
May 28, 2011
253
74
28
Yeah, if you start reading through all the guides, blogs, and other things, there are some small gotchas and some big ones. It's kind of like ZFS and its huge caveat that you can't add disks to an existing disk group (vdev). Not a deal breaker for sure, but an important design consideration.

SS/S2D has its own little quirks:
- enormously inefficient drive use (minimum 7 disks for the "raid-6"-ish dual-parity mode), and mirrors are almost as bad
- some really important hardware requirements, some of which can be tough, but similar to other HCI
- all the cool stuff is in PowerShell, but that is OK because I am trying to become more comfortable with it

But, lots of little advantages too.

-JCL
 

jcl333

Active Member
May 28, 2011
253
74
28
There is a way to do this.

Tiered Storage Spaces Windows 10
Storage spaces on Windows 10 and tiering

The commands should also work on server 2019.

Chris
Thanks, this is exactly the type of thing I am looking for; nice examples.

It is a slightly humorous example, since I would never do this with consumer hardware, but the commands are still good.

If I add to this the commands for turning on CRC (integrity streams), compression, deduplication, etc., then I could create a sort of recipe.
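For example, the integrity-stream and dedup pieces would be something along these lines (the drive letter is just a placeholder, and I have not tested this on my setup yet):

Code:
# ReFS integrity streams (the checksumming/"CRC" piece) for everything created on the volume from now on
Set-FileIntegrity -FileName "D:\" -Enable $true

# Data Deduplication (the dedup engine also compresses the chunk store); the feature must be installed first
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "D:" -UsageType Default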

-JCL
 

Net-Runner

Member
Feb 25, 2016
81
22
8
40
I would not use S2D for what you are trying to build, for multiple reasons that I will try to describe.
First of all, a small 2-node setup is very limited in options and features when talking about Storage Spaces Direct. To feel the full power of this technology, you have to use at least four nodes. The storage efficiency in that kind of setup will also not be that great (see the Storage Spaces Direct Calculator).

S2D runs great only on ready nodes provided by Microsoft's various partners. Everything else might very well fail, and using unsupported hardware for S2D will cause problems for sure. Mirror-accelerated parity is an essential part of S2D + ReFS and is not supported in regular Storage Spaces (see Mirror-accelerated parity).

I would recommend you create regular standalone storage spaces with automated tiering (see Deploy Storage Spaces on a stand-alone server) using NTFS. Microsoft's documentation is crappy as hell; thus, I am not sure whether SSD mirror over HDD parity is supported and will work, but mirror over mirror will do. Storage efficiency will suck in this case, of course :(

I personally hate any type of software RAID provided by Microsoft. I have seen too much shit. The last thing I would do is give my family photos and videos to any flavor of Storage Spaces. Add ReFS on top, and you have a double no-go. You can just delete the data right away to save time and nerves.

Get some "good old" hardware RAID controllers by LSI, create OBR10 in each of the servers, maybe use CacheCade or Intel OpenCAS Open Cache Acceleration Software | Open CAS to speed things up with flash and go the proven and way more reliable way.

To replicate data between the servers, Hyper-V Replica is another excellent way of getting regular headaches. I would strongly suggest using Veeam Backup and Replication for this purpose: https://www.veeam.com/vm-backup-recovery-replication-software.html. The Community Edition covers up to 10 virtual machines and is entirely free to use. It will handle both replication and backups excellently.

If you need block-level mirroring like S2D rather than replication, I would probably look towards StarWind instead: https://www.starwindsoftware.com/starwind-virtual-san-free. It has a free version managed using PowerShell, which fits with what you are trying to achieve. It also supports conventional iSER/RDMA/RoCE/NVMe-oF technologies that can be used by other major operating systems you might be interested in running as guests, instead of the proprietary SMB Direct stuff.

I would also look on eBay for used Mellanox ConnectX 10GbE network cards. Get a pair of dual-port babies and connect the hosts directly. A 10GbE switch will cost you a fortune and isn't really necessary for a small two-node setup at home.

Sorry for being a bit stubborn, maybe. I am just sincerely trying to help.
 

jcl333

Active Member
May 28, 2011
253
74
28
First, let me say thank you for taking the time to make such a detailed response.

I would not use S2D for what you are trying to build, for multiple reasons that I will try to describe.
First of all, a small 2-node setup is very limited in options and features when talking about Storage Spaces Direct. To feel the full power of this technology, you have to use at least four nodes. The storage efficiency in that kind of setup will also not be that great (see the Storage Spaces Direct Calculator).
I won't disagree with this. Most HCI platforms lean heavily on nodes rather than any kind of redundancy within a node; I personally think that is a bad design choice, especially when you start adding nodes just to get more storage. I have looked at Nutanix, VMware vSAN, SimpliVity, and S2D. I think SimpliVity is by far the best, but it is basically HPE only, although apparently they are planning to release a software-only version that I could potentially use in a home lab without needing the proprietary accelerator card.

For S2D, basically it is "accessible" because I don't have to worry as much about licensing, it seems friendly to varying hardware, and it has de-duplication and compression without requiring completely insane amounts of memory. Other than that, I am much more comfortable with Windows than I am with Linux, BSD, etc.

But, it is also fair to say I am trying to work on file services, and don't necessarily need HCI, so it is a bit of a stretch.


S2D runs great only on ready nodes provided by Microsoft's various partners. Everything else might very well fail, and using unsupported hardware for S2D will cause problems for sure. Mirror-accelerated parity is an essential part of S2D + ReFS and is not supported in regular Storage Spaces (see Mirror-accelerated parity).
There do appear to be some obstacles in this regard, especially in getting S2D to recognize that you have SSDs with PLP. I think I am pretty close on supported hardware, but it is starting to look like I might need a different solution.

I would recommend you create regular standalone storage spaces with automated tiering (see Deploy Storage Spaces on a stand-alone server) using NTFS. Microsoft's documentation is crappy as hell; thus, I am not sure whether SSD mirror over HDD parity is supported and will work, but mirror over mirror will do. Storage efficiency will suck in this case, of course :(
So far I have not seen anything that specifically says that mirror-accelerated parity will not work with regular SS as opposed to S2D. Yes, storage efficiency with mirroring is embarrassingly bad. And I still don't understand why mirror-accelerated parity is so slow (although they improved it recently). I understand their explanation of why it is, but I don't accept it; commodity processors are much faster than RAID ASICs.

I personally hate any type of software RAID provided by Microsoft. I have seen too much shit. The last thing I would do is give my family photos and videos to any flavor of Storage Spaces. Add ReFS on top, and you have a double no-go. You can just delete the data right away to save time and nerves.
I think it is possible you are conflating software RAID with modern software-defined storage; they are not the same. Things like the RAID you have been able to do in the storage manager since the days of Windows NT, or with most Intel chipsets, yes, those are bad, and I think that is why software RAID has a bad reputation. There are good implementations of software RAID, such as those used by QNAP and Synology (yes, those are not Microsoft, of course).

I look at Storage Spaces as an evolution of things like Drive Extender in WHS. With software-defined storage adding things like de-duplication, compression, and parity checking, I think it definitely has potential. But to be fair, since I would not trust anything without having a backup (or two), I am willing to give it a chance.

What do you have against ReFS? They have been working on it for many years, and it looks to me like they have finally sorted the issues out. They do fully support it in production at an enterprise level.


Get some "good old" hardware RAID controllers by LSI, create OBR10 in each of the servers, maybe use CacheCade or Intel OpenCAS Open Cache Acceleration Software | Open CAS to speed things up with flash and go the proven and way more reliable way.
Really? I will admit I would love to go back to the good old days, get myself a nice RAID card from LSI, Areca, or Adaptec (or whoever owns these today), and be done with it. But those cards are still costly and have well-known issues such as the RAID5 write hole and silent data corruption; I really don't think they are appropriate for the many TBs of storage we have today. I think you would just be trading one set of issues for another.

Before I did that, I think I would just go with TrueNAS and ZFS... which, again, are software-defined. Not perfect of course; it is a little difficult to expand storage, and deduplication requires insane amounts of RAM. But I think it is on the short list of the most reliable ways to store data.


To replicate data between the servers, Hyper-V Replica is another excellent way of getting regular headaches. I would strongly suggest using Veeam Backup and Replication for this purpose (Backup software for virtual, physical and cloud - Veeam Backup & Replication). The Community Edition covers up to 10 virtual machines and is entirely free to use. It will handle both replication and backups excellently.
I have not used Hyper-V Replica, but I know VMware SRM/replication really sucks. Note that I likely could not use a solution that just replicates VMs, because the storage technology I am looking at requires direct access to the hardware to work well, so in a VM I would have to pass the controller through to it, and I think that would break most VM replication methods that depend on using virtual disks.

Have you looked at Microsoft's Storage Replica? It looks really good to me; it is basically similar to some of the SAN replication methods used by EMC and others.
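From what I have read so far, the server-to-server flavor would look roughly like this (server names, replication group names, and drive letters are placeholders; I have not actually tried it yet):

Code:
# Storage Replica feature on both servers (the full feature set needs Datacenter edition)
Install-WindowsFeature -Name Storage-Replica -ComputerName server1 -Restart
Install-WindowsFeature -Name Storage-Replica -ComputerName server2 -Restart

# Replicate data volume D: (with log volume L:) from server1 to server2
New-SRPartnership -SourceComputerName server1 -SourceRGName rg01 -SourceVolumeName D: -SourceLogVolumeName L: `
    -DestinationComputerName server2 -DestinationRGName rg02 -DestinationVolumeName D: -DestinationLogVolumeName L: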

Veeam is a good product for sure; that is definitely on my list. I could see doing Storage Spaces on a pair of servers and just using one to back up the other, and that's it. Not sure if the Community Edition can do that. But regular NT Backup is actually much better than it should be.


If you need block-level mirroring like S2D rather than replication, I would probably look towards StarWind instead (Software Defined Storage for the HCI • StarWind Virtual SAN ® Free). It has a free version managed using PowerShell, which fits with what you are trying to achieve. It also supports conventional iSER/RDMA/RoCE/NVMe-oF technologies that can be used by other major operating systems you might be interested in running as guests, instead of the proprietary SMB Direct stuff.
I have looked at the StarWind stuff; it is tempting, and I hear good things about it.

Is SMB Direct really that bad?


I would also look on eBay for used Mellanox ConnectX 10GbE network cards. Get a pair of dual-port babies and connect the hosts directly. A 10GbE switch will cost you a fortune and isn't really necessary for a small two-node setup at home.
I already have some nice Intel 10Gig cards and will use those unless my research finds something with RDMA that really looks good (and there are some prospects). I am also looking at going up to 25Gig or 40Gig.

Sorry for being a bit stubborn, maybe. I am just sincerely trying to help.
No, it's fine, you raise some good points I think. It is definitely giving me some things to consider.

I am actually starting to get worn out doing research on the best solution and want to just get it done and move on.

-JCL
 

homeadm

New Member
Apr 18, 2023
4
0
1
Hello everyone!

I would like to create a mirror accelerated parity SS (not S2D) under Server 2016, from a bunch of HDDs (no SSDs). I'm evaluating this configuration in a VM. It looks very strange:

Code:
PS C:\Users\Administrator> get-storagetier -friendlyname r1


ObjectId               : {1}\\WIN-AOK5B3SC5UH\root/Microsoft/Windows/Storage/Providers_v2\SPACES_StorageTier.ObjectId="
                         {c6aa75eb-ddba-11ed-9c06-806e6f6e6963}:ST:{d487633d-280a-43fc-9d65-e5ae301007c5}{cc503513-883f
                         -4caa-9980-e46b43a0b3ba}"
PassThroughClass       :
PassThroughIds         :
PassThroughNamespace   :
PassThroughServer      :
UniqueId               : {cc503513-883f-4caa-9980-e46b43a0b3ba}
AllocatedSize          : 0
AllocationUnitSize     : Auto
ColumnIsolation        : PhysicalDisk
Description            :
FaultDomainAwareness   : PhysicalDisk
FootprintOnPool        : 0
FriendlyName           : r1
Interleave             : 262144
MediaType              : HDD
NumberOfColumns        : Auto
NumberOfDataCopies     : 2
NumberOfGroups         : 1
ParityLayout           :
PhysicalDiskRedundancy : 1
ProvisioningType       : Fixed
ResiliencySettingName  : Mirror
Size                   : 0
Usage                  : Data
PSComputerName         :



PS C:\Users\Administrator> get-storagetier -friendlyname r5


ObjectId               : {1}\\WIN-AOK5B3SC5UH\root/Microsoft/Windows/Storage/Providers_v2\SPACES_StorageTier.ObjectId="
                         {c6aa75eb-ddba-11ed-9c06-806e6f6e6963}:ST:{d487633d-280a-43fc-9d65-e5ae301007c5}{76faba26-d469
                         -40b0-bc87-c91d15ea841a}"
PassThroughClass       :
PassThroughIds         :
PassThroughNamespace   :
PassThroughServer      :
UniqueId               : {76faba26-d469-40b0-bc87-c91d15ea841a}
AllocatedSize          : 0
AllocationUnitSize     : Auto
ColumnIsolation        : PhysicalDisk
Description            :
FaultDomainAwareness   : PhysicalDisk
FootprintOnPool        : 0
FriendlyName           : r5
Interleave             : 262144
MediaType              : HDD
NumberOfColumns        : Auto
NumberOfDataCopies     : 1
NumberOfGroups         : Auto
ParityLayout           : Rotated Parity
PhysicalDiskRedundancy : 1
ProvisioningType       : Fixed
ResiliencySettingName  : Parity
Size                   : 0
Usage                  : Data
PSComputerName         :



PS C:\Users\Administrator> get-storagetiersupportedsize

cmdlet Get-StorageTierSupportedSize at command pipeline position 1
Supply values for the following parameters:
FriendlyName[0]: r1
FriendlyName[1]: r5
FriendlyName[2]:

SupportedSizes TierSizeMin TierSizeMax TierSizeDivisor
-------------- ----------- ----------- ---------------
{}                       0           0               0
{}                       0           0               0

Sizes of the tiers are 0, while they should be 10 GB and 74 GB respectively. Various other properties are also weird.



Is this normal? How can I diagnose mirror accelerated parity? With tiered SS (SSD+HDD) the tier optimization is done by defrag, but that doesn't work here.
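What I mean by checking it with defrag is roughly this (the drive letter is just an example):

Code:
# Run a storage tier optimization pass (/G) on the tiered volume
defrag E: /G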
 

Jorge Perez

Active Member
Dec 8, 2019
104
44
28
Hello everyone!

I would like to create a mirror accelerated parity SS (not S2D) under Server 2016, from a bunch of HDDs (no SSDs). I'm evaluating this configuration in a VM. It looks very strange: sizes of the tiers are 0, while they should be 10 GB and 74 GB respectively, and various other properties are also weird.

Is this normal? How can I diagnose mirror accelerated parity? With tiered SS (SSD+HDD) the tier optimization is done by defrag, but that doesn't work here.
Did you already create the volume?
 

homeadm

New Member
Apr 18, 2023
4
0
1
No, but I have to do this work this week; I ran out of space on my old array. I have the hard drives mounted and wired. I'm still scratching my head over exactly what commands I should use to create a mirror-accelerated-parity Storage Space, and how to diagnose whether it's working as expected.

Since most of the activity will be reads, I'm considering allocating 3-5% of the space to the mirror tier. Is that OK? I have 14 TiB of raw space.
 

homeadm

New Member
Apr 18, 2023
4
0
1
I started up my "production" SS in mirror-accelerated-parity mode. I have been evaluating it since yesterday. So far, so good. Performance indicates that all writes land in the mirror section and are then moved in the background to parity. The GUI doesn't seem to support this animal, hence the strange information.
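One way to sanity-check this (just a sketch; the interesting objects are the per-volume "real" tiers) is to watch the mirror tier's AllocatedSize grow during a big copy and then drop again as data is rotated out to parity:

Code:
# The per-volume tiers (named <volume>_<tier>) are the ones with a nonzero Size;
# the mirror tier's AllocatedSize grows during writes and drops as data moves to parity
Get-StorageTier | Select-Object FriendlyName, Size, AllocatedSize, FootprintOnPool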
 

homeadm

New Member
Apr 18, 2023
4
0
1
No script, it's just 3 commands. Since my tests indicated that in a multi-drive configuration the mirror is much slower on reads than parity, I decided to dedicate only 200 GB to the mirror and the remaining 10+ TB to parity. Here are my commands:

Code:
# Mirror tier (2 data copies) on the HDDs
New-StorageTier -StoragePoolFriendlyName "Storage Space 1" -FriendlyName Mirror -MediaType HDD -ResiliencySettingName Mirror
# Single-parity tier across 5 columns (4 data + 1 parity per stripe)
New-StorageTier -StoragePoolFriendlyName "Storage Space 1" -FriendlyName Parity -MediaType HDD -ResiliencySettingName Parity -NumberOfColumns 5
# Tiered ReFS volume: 200 GB mirror tier plus a 10644 GB parity tier
New-Volume -FriendlyName MirAccPar -FileSystem ReFS -StoragePoolFriendlyName "Storage Space 1" -StorageTierFriendlyNames Mirror, Parity -StorageTierSizes 200GB, 10644GB
Of course you must create your storage pool first.
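Creating the pool is a one-liner along these lines (the disk selection here is just an example; pick your actual disks):

Code:
# Create the pool from all disks that are available for pooling
New-StoragePool -FriendlyName "Storage Space 1" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks (Get-PhysicalDisk -CanPool $true)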

It looks like "tiers" that I created are some abstract objects, used only to create "real tiers" named [volume name]_[tier name]:

Code:
PS C:\Users\Administrator> get-storagetier


ObjectId               : {1}\\UTS\root/Microsoft/Windows/Storage/Providers_v2\SPACES_StorageTier.ObjectId="{90e94a88-46
                         96-11ee-aa2e-806e6f6e6963}:ST:{3cecad52-10e8-4c5a-9caa-66dee2c0b9ec}{24b3380a-c462-4723-a491-0
                         92fe75468b8}"
PassThroughClass       :
PassThroughIds         :
PassThroughNamespace   :
PassThroughServer      :
UniqueId               : {24b3380a-c462-4723-a491-092fe75468b8}
AllocatedSize          : 0
AllocationUnitSize     : Auto
ColumnIsolation        : PhysicalDisk
Description            :
FaultDomainAwareness   : PhysicalDisk
FootprintOnPool        : 0
FriendlyName           : Mirror
Interleave             : 262144
MediaType              : HDD
NumberOfColumns        : Auto
NumberOfDataCopies     : 2
NumberOfGroups         : 1
ParityLayout           :
PhysicalDiskRedundancy : 1
ProvisioningType       : Fixed
ResiliencySettingName  : Mirror
Size                   : 0
Usage                  : Data
PSComputerName         :

ObjectId               : {1}\\UTS\root/Microsoft/Windows/Storage/Providers_v2\SPACES_StorageTier.ObjectId="{90e94a88-46
                         96-11ee-aa2e-806e6f6e6963}:ST:{3cecad52-10e8-4c5a-9caa-66dee2c0b9ec}{45e81c6a-9971-4b4e-9152-8
                         4cdf2400557}"
PassThroughClass       :
PassThroughIds         :
PassThroughNamespace   :
PassThroughServer      :
UniqueId               : {45e81c6a-9971-4b4e-9152-84cdf2400557}
AllocatedSize          : 0
AllocationUnitSize     : Auto
ColumnIsolation        : PhysicalDisk
Description            :
FaultDomainAwareness   : PhysicalDisk
FootprintOnPool        : 0
FriendlyName           : Parity
Interleave             : 262144
MediaType              : HDD
NumberOfColumns        : 5
NumberOfDataCopies     : 1
NumberOfGroups         : Auto
ParityLayout           : Rotated Parity
PhysicalDiskRedundancy : 1
ProvisioningType       : Fixed
ResiliencySettingName  : Parity
Size                   : 0
Usage                  : Data
PSComputerName         :

ObjectId               : {1}\\UTS\root/Microsoft/Windows/Storage/Providers_v2\SPACES_StorageTier.ObjectId="{90e94a88-46
                         96-11ee-aa2e-806e6f6e6963}:ST:{3cecad52-10e8-4c5a-9caa-66dee2c0b9ec}{70080256-3972-405d-a744-2
                         3904ac34a49}"
PassThroughClass       :
PassThroughIds         :
PassThroughNamespace   :
PassThroughServer      :
UniqueId               : {70080256-3972-405d-a744-23904ac34a49}
AllocatedSize          : 11428907974656
AllocationUnitSize     : 268435456
ColumnIsolation        : PhysicalDisk
Description            :
FaultDomainAwareness   : PhysicalDisk
FootprintOnPool        : 14286134968320
FriendlyName           : MirAccPar_Parity
Interleave             : 262144
MediaType              : HDD
NumberOfColumns        : 5
NumberOfDataCopies     : 1
NumberOfGroups         : 1
ParityLayout           : Rotated Parity
PhysicalDiskRedundancy : 1
ProvisioningType       : Fixed
ResiliencySettingName  : Parity
Size                   : 11428907974656
Usage                  : Data
PSComputerName         :

ObjectId               : {1}\\UTS\root/Microsoft/Windows/Storage/Providers_v2\SPACES_StorageTier.ObjectId="{90e94a88-46
                         96-11ee-aa2e-806e6f6e6963}:ST:{3cecad52-10e8-4c5a-9caa-66dee2c0b9ec}{ff5d18e5-e2bc-429e-8fa1-d
                         c368447328c}"
PassThroughClass       :
PassThroughIds         :
PassThroughNamespace   :
PassThroughServer      :
UniqueId               : {ff5d18e5-e2bc-429e-8fa1-dc368447328c}
AllocatedSize          : 214748364800
AllocationUnitSize     : 536870912
ColumnIsolation        : PhysicalDisk
Description            :
FaultDomainAwareness   : PhysicalDisk
FootprintOnPool        : 429496729600
FriendlyName           : MirAccPar_Mirror
Interleave             : 262144
MediaType              : HDD
NumberOfColumns        : 2
NumberOfDataCopies     : 2
NumberOfGroups         : 1
ParityLayout           :
PhysicalDiskRedundancy : 1
ProvisioningType       : Fixed
ResiliencySettingName  : Mirror
Size                   : 214748364800
Usage                  : Data
PSComputerName         :
The MirAccPar_Mirror and MirAccPar_Parity objects have the desired sizes.

For now, everything works flawlessly, but my new array is still under evaluation tests. This is my first time running ReFS in "production".