Learning Storage Spaces, poor 4k writes


JayG30

Active Member
Feb 23, 2015
Hi,

I'll preface this by saying I've not worked with Storage Spaces until about 2 days ago. I wanted to see how it performs in Server 2016. I'm working my way through basic mirrors, then write-back cache/read cache, and tiered storage. I'm NOT doing S2D testing (not interested until the licensing drops significantly).

So I created a mirror of 4 SSDs, with strictly default settings (2 columns, default WBC, etc.). When running AS-SSD I see performance similar to what I'd get with a cacheless RAID card, except for 4K. 4K writes are, at best, HALF of what they should be and, if I remember correctly, the reads were also not where they should have been.

The disks are 960GB Samsung PM853T.

Something tells me this might be an issue with sector sizes...? That link says that all SSDs are 4K, but these are not showing that way.

Code:
Get-PhysicalDisk | select FriendlyName, Manufacturer, Model, PhysicalSectorSize, LogicalSectorSize | ft

FriendlyName         Manufacturer Model            PhysicalSectorSize LogicalSectorSize
------------         ------------ -----            ------------------ -----------------
ATA SAMSUNG MZ7GE960 ATA          SAMSUNG MZ7GE960                512               512
ATA SAMSUNG MZ7GE960 ATA          SAMSUNG MZ7GE960                512               512
ATA SAMSUNG MZ7GE960 ATA          SAMSUNG MZ7GE960                512               512
ATA SAMSUNG MZ7GE960 ATA          SAMSUNG MZ7GE960                512               512

And this is what the vDisk looks like after creation.
Code:
Get-VirtualDisk | select FriendlyName, PhysicalSectorSize, LogicalSectorSize | ft

FriendlyName    PhysicalSectorSize LogicalSectorSize
------------    ------------------ -----------------
SSD_datastore01               4096               512
I do the following to create the pool and mirrored vDisk of SSDs. I have also tested with NTFS and with explicitly setting the provisioning type, and it made no difference.
Code:
#Create storage pool from all 4 SSD's
New-StoragePool -FriendlyName "vmpool" -StorageSubSystemUniqueId (Get-StorageSubSystem -FriendlyName "Windows Storage*").UniqueId -PhysicalDisks (Get-PhysicalDisk -CanPool $true | ? MediaType -eq "SSD")


#Create mirrored vDisk using all the space on the SSD's
New-VirtualDisk -FriendlyName "SSD_datastore01" -StoragePoolFriendlyName "vmpool" -UseMaximumSize -ResiliencySettingName Mirror

#Get vDisk number
Get-VirtualDisk -FriendlyName "SSD_datastore01" | Get-Disk

#Initialize disk
Initialize-Disk -Number 9

#Add partition to disk
New-Partition -DiskNumber 9 -UseMaximumSize -AssignDriveLetter

#Format partition
Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel "SSD_Datastore01"
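
For anyone following along, a quick way to confirm what defaults the vDisk actually picked up (columns, interleave, WBC size, sector sizes) is something like this; it's just the standard Get-VirtualDisk properties shown further down:
Code:
#Check the layout the new vDisk ended up with
Get-VirtualDisk -FriendlyName "SSD_datastore01" | fl NumberOfColumns, NumberOfDataCopies, Interleave, WriteCacheSize, LogicalSectorSize, PhysicalSectorSize, ProvisioningType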
 

JayG30

Active Member
Feb 23, 2015
Info below; I had to remove empty fields and such to make it fit in a post.
Here is the list output of the physical disks:
Get-PhysicalDisk | ? MediaType -EQ "SSD" | fl

UniqueId : 50025388002E4180
FriendlyName : ATA SAMSUNG MZ7GE960
HealthStatus : Healthy
Manufacturer : ATA
Model : SAMSUNG MZ7GE960
OperationalStatus : OK
AllocatedSize : 959656755200
BusType : SAS
CannotPoolReason : In a Pool
CanPool : False
DeviceId : 1
FirmwareVersion : 4F3Q
IsPartial : False
LogicalSectorSize : 512
MediaType : SSD
PhysicalSectorSize : 512
Size : 960193626112
SpindleSpeed : 0
SupportedUsages : {Auto-Select, Manual-Select, Hot Spare, Retired...}
UniqueIdFormat : FCPH Name
Usage : Auto-Select
VirtualDiskFootprint : 958583013376
ClassName : MSFT_PhysicalDisk

UniqueId : 50025388002E4189
FriendlyName : ATA SAMSUNG MZ7GE960
HealthStatus : Healthy
Manufacturer : ATA
Model : SAMSUNG MZ7GE960
OperationalStatus : OK
AllocatedSize : 959656755200
BusType : SAS
CannotPoolReason : In a Pool
CanPool : False
DeviceId : 2
FirmwareVersion : 4F3Q
IsPartial : False
LogicalSectorSize : 512
MediaType : SSD
OtherCannotPoolReasonDescription :
PhysicalSectorSize : 512
Size : 960193626112
SpindleSpeed : 0
SupportedUsages : {Auto-Select, Manual-Select, Hot Spare, Retired...}
UniqueIdFormat : FCPH Name
Usage : Auto-Select
VirtualDiskFootprint : 958583013376
ClassName : MSFT_PhysicalDisk

UniqueId : 50025388002E4182
FriendlyName : ATA SAMSUNG MZ7GE960
HealthStatus : Healthy
Manufacturer : ATA
Model : SAMSUNG MZ7GE960
OperationalStatus : OK
AllocatedSize : 959656755200
BusType : SAS
CannotPoolReason : In a Pool
CanPool : False
DeviceId : 3
FirmwareVersion : 4F3Q
IsPartial : False
LogicalSectorSize : 512
MediaType : SSD
PhysicalSectorSize : 512
Size : 960193626112
SpindleSpeed : 0
SupportedUsages : {Auto-Select, Manual-Select, Hot Spare, Retired...}
UniqueIdFormat : FCPH Name
Usage : Auto-Select
VirtualDiskFootprint : 958583013376
ClassName : MSFT_PhysicalDisk

UniqueId : 50025388002E417F
FriendlyName : ATA SAMSUNG MZ7GE960
HealthStatus : Healthy
Manufacturer : ATA
Model : SAMSUNG MZ7GE960
OperationalStatus : OK
AllocatedSize : 959656755200
BusType : SAS
CannotPoolReason : In a Pool
CanPool : False
DeviceId : 0
FirmwareVersion : 4F3Q
IsPartial : False
LogicalSectorSize : 512
MediaType : SSD
OtherCannotPoolReasonDescription :
PhysicalSectorSize : 512
Size : 960193626112
SpindleSpeed : 0
SupportedUsages : {Auto-Select, Manual-Select, Hot Spare, Retired...}
UniqueIdFormat : FCPH Name
Usage : Auto-Select
VirtualDiskFootprint : 958583013376
ClassName : MSFT_PhysicalDisk


Here is the list output of the vDisk:
Get-VirtualDisk | fl

UniqueId : DF64B8E4C4894E4ABDFD8B128AB6E460
Access : Read/Write
AllocatedSize : 1915555414016
AllocationUnitSize : 1073741824
DetachedReason : None
FaultDomainAwareness : PhysicalDisk
FootprintOnPool : 3834332053504
FriendlyName : SSD_datastore01
HealthStatus : Healthy
Interleave : 262144
IsDeduplicationEnabled : False
IsEnclosureAware : False
IsManualAttach : False
IsSnapshot : False
IsTiered : False
LogicalSectorSize : 512
NumberOfAvailableCopies :
NumberOfColumns : 2
NumberOfDataCopies : 2
NumberOfGroups : 1
OperationalStatus : OK
OtherOperationalStatusDescription :
ParityLayout : Unknown
PhysicalDiskRedundancy : 1
PhysicalSectorSize : 4096
ProvisioningType : Fixed
ReadCacheSize : 0
RequestNoSinglePointOfFailure : False
ResiliencySettingName : Mirror
Size : 1915555414016
UniqueIdFormat : Vendor Specific
Usage : Other
WriteCacheSize : 1073741824
 

cesmith9999

Well-Known Member
Mar 26, 2013
You need to change a setting in the storage pool:

-LogicalSectorSizeDefault 512

Chris
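
If the pool gets rebuilt from scratch, roughly like this (sketch only, same disks as before; the existing pool would have to be destroyed first):
Code:
#Recreate the pool forcing 512-byte logical sectors
New-StoragePool -FriendlyName "vmpool" -StorageSubSystemUniqueId (Get-StorageSubSystem -FriendlyName "Windows Storage*").UniqueId -PhysicalDisks (Get-PhysicalDisk -CanPool $true | ? MediaType -eq "SSD") -LogicalSectorSizeDefault 512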
 

JayG30

Active Member
Feb 23, 2015
You need to change a setting in the storage pool:

-LogicalSectorSizeDefault 512

Chris
I'm not sure if that is the problem though. I just checked and it looks like LogicalSectorSize is 512.

Code:
Get-StoragePool "vmpool" | ft friendlyname, logicalsectorsize, physicalsectorsize

friendlyname logicalsectorsize physicalsectorsize
------------ ----------------- ------------------
vmpool                     512               4096

And this is how bad the performance looks (~5 MB/s 4K writes):
[AS-SSD benchmark screenshots]

Compared to performance I've seen using a RAID10 on a cacheless controller:
[benchmark screenshot: RAID10 on cacheless controller]
Or the same RAID10 tested from within a VMware VM:
[benchmark screenshot: RAID10 tested from inside a VMware VM]
 

JayG30

Active Member
Feb 23, 2015
Similar poor performance running CrystalDiskMark and ATTO.
I'm looking at this compared to what @PigLover got with a 4-disk mirror shown HERE (realizing he used different disks and controllers, and I'm on 2016 rather than 2012R2, but his numbers aren't far off what I get with a RAID10 on the cacheless controller instead of Storage Spaces).

[CrystalDiskMark and ATTO screenshots]
Code:
-----------------------------------------------------------------------
CrystalDiskMark 5.1.2 x64 (C) 2007-2016 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

   Sequential Read (Q= 32,T= 1) :  1050.219 MB/s
  Sequential Write (Q= 32,T= 1) :   577.854 MB/s
  Random Read 4KiB (Q= 32,T= 1) :   130.650 MB/s [ 31897.0 IOPS]
Random Write 4KiB (Q= 32,T= 1) :    92.216 MB/s [ 22513.7 IOPS]
         Sequential Read (T= 1) :  1024.482 MB/s
        Sequential Write (T= 1) :   320.385 MB/s
   Random Read 4KiB (Q= 1,T= 1) :    12.888 MB/s [  3146.5 IOPS]
  Random Write 4KiB (Q= 1,T= 1) :    17.435 MB/s [  4256.6 IOPS]

  Test : 4096 MiB [E: 3.1% (0.9/29.8 GiB)] (x5)  [Interval=5 sec]
  Date : 2016/08/23 10:50:45
    OS : Windows Server 2016 Server Standard (full installation) [10.0 Build 14300] (x64)
 

PigLover

Moderator
Jan 26, 2011
This kind of performance wasn't the main reason I've moved away from SS - but it certainly contributed. More troublesome for me was license costs of keeping a Server 2012R2/Hyper-V cluster "legal" and concurrent improvements in the open-source alternatives.

Also - I don't know if this is improved in SS-Direct and/or Server 2016 - but I was also disappointed by the mysteriously missing free-space in my Parity arrays. Never did get an understanding of that and frankly gave up trying.

In general, I find SS to be a GREAT idea, poorly implemented, poorly documented, and encumbered by onerous licensing. Perhaps typical of MS.
 

cesmith9999

Well-Known Member
Mar 26, 2013
Get-PhysicalDisk |Get-StorageReliabilityCounter | Sort-Object DeviceId | ft DeviceId,*Laten*,Temp*,Power*,*Error*,*Start*

Get-PhysicalDisk |Get-StorageReliabilityCounter | Sort-Object DeviceId | ft DeviceId,*Laten* -auto

let's see if one disk is giving you problems

Chris
 

JayG30

Active Member
Feb 23, 2015
Get-PhysicalDisk |Get-StorageReliabilityCounter | Sort-Object DeviceId | ft DeviceId,*Laten*,Temp*,Power*,*Error*,*Start*
Code:
DeviceId FlushLatencyMax ReadLatencyMax WriteLatencyMax Temperature TemperatureMax PowerOnHours
-------- --------------- -------------- --------------- ----------- -------------- ----------
0                     38            118             101           0              0       8289
1                     39            116             114           0              0       7933
2                     39            118             115           0              0       7933
3                     38            118             115           0              0       8289
Get-PhysicalDisk |Get-StorageReliabilityCounter | Sort-Object DeviceId | ft DeviceId,*Laten* -auto
Code:
DeviceId FlushLatencyMax ReadLatencyMax WriteLatencyMax
-------- --------------- -------------- ---------------
0                     38            118             101
1                     39            116             114
2                     39            118             115
3                     38            118             115
 

JayG30

Active Member
Feb 23, 2015
Doesn't look like a physical disk issue to me....

Could it have something to do with the default WBC? I don't really understand the purpose of a 1GB WBC being generated when the entire array is SSD.
 

cesmith9999

Well-Known Member
Mar 26, 2013
That was my thought; those numbers look good.

Can you create new vDisks with different WBC sizes and see if you get different results? Basically it is writing the data to the SSDs twice: once to the WBC and again as data.

Chris
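
For the extreme case, something like this should do it (sketch only; the name and size are just examples, and -WriteCacheSize 0 should skip the WBC journal entirely):
Code:
#Test vDisk with the write-back cache disabled, for comparison against the default 1GB
New-VirtualDisk -FriendlyName "SSD_test_nowbc" -StoragePoolFriendlyName "vmpool" -Size 100GB -ResiliencySettingName Mirror -WriteCacheSize 0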
 

JayG30

Active Member
Feb 23, 2015
Created new vDisk using a 100GB WBC, same performance.

New-VirtualDisk -FriendlyName "SSD_datastore01" -StoragePoolFriendlyName "vmpool" -UseMaximumSize -ResiliencySettingName Mirror -WriteCacheSize 100GB
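
(In case it's useful to anyone, the new cache size should show up on the vDisk itself; quick check, nothing fancy:)
Code:
Get-VirtualDisk -FriendlyName "SSD_datastore01" | ft FriendlyName, WriteCacheSize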
 

JayG30

Active Member
Feb 23, 2015
So I went back and ran Format-Volume with NTFS instead of ReFS. 4K writes went from ~5 MB/s to ~15 MB/s. I could have sworn I was getting 15 MB/s with ReFS at some point as well, but I can't reproduce that anymore. Still piss poor either way.
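
Next thing I might try is an explicit cluster size when formatting, just to rule that variable out (sketch only; 64K is just an example value, I haven't verified it matters here):
Code:
#Reformat the volume with an explicit allocation unit (cluster) size
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SSD_Datastore01"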
 

JayG30

Active Member
Feb 23, 2015
So, I realize I didn't mention that I'm running Hyper-V Server 2016 on the physical server, so Storage Spaces is set up at that level. To test, I was using a Windows Server 2016 VM so I could run the GUI test tools. I started thinking that perhaps the problem resides at the VM level.

So I ran some tests with DiskSpd directly on the host to take the VM out of the picture. To mimic the 4K (QD1) tests that AS-SSD, CrystalDiskMark, etc. do, I ran the following command. If this doesn't look right, someone let me know.
Code:
diskspd.exe -c1G -w100 -t1 -o1 -b4K -r -h -L -D E:\IO2.dat
-c1G = 1GB test file size
-w100 = 100% writes
-t1 = 1 worker thread per test file target
-o1 = 1 outstanding I/O per target, per worker thread
-b4K = 4K block size
-r = random I/O
-h = disable software & hardware caching
-L = capture latency
-D = gather IOPS statistics (per 1000ms interval)
Some details on flags HERE.
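
To also mimic the Q32 numbers that the GUI benchmarks report, I'd presumably just bump the queue depth, something like this (sketch, untested; IO3.dat is just a new test file name):
Code:
diskspd.exe -c1G -w100 -t1 -o32 -b4K -r -h -L -D E:\IO3.dat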

The takeaway was this:
Code:
Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | IopsStdDev | LatStdDev |  file
------------------------------------------------------------------------------------------------------------------
     0 |       378757120 |        92470 |      36.07 |    9233.50 |    0.104 |      36.33 |     0.011 | E:\IO2.dat (1024MB)
------------------------------------------------------------------------------------------------------------------
That's double the best result I've had inside the VM. This is making me lean towards the issue being caused at the VM level, though I'm not sure WHY or how to resolve it. Perhaps I'm not giving it enough CPU or memory (set to dynamic) and it is bottlenecking? I'd hope that since it is the only VM on the box and I'm not limiting it, it would grab the resources if it needed them. Then again I've noticed, especially with memory, that Hyper-V has never been as intelligent as ESXi at managing resources (i.e., memory ballooning). Or perhaps I have the storage set up incorrectly in the VM. Would love to hear from anyone that might have an opinion.
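
One thing I still want to rule out is how the VM's disk is actually attached. From the host, something like this should show where the VM's virtual disk lives and what sector sizes the VHDX reports (sketch; the VM name is just a placeholder):
Code:
#Run on the Hyper-V host: find the VM's virtual disks and their sector sizes
Get-VMHardDiskDrive -VMName "TestVM2016" | ft VMName, ControllerType, Path
Get-VMHardDiskDrive -VMName "TestVM2016" | % { Get-VHD -Path $_.Path } | ft Path, VhdFormat, LogicalSectorSize, PhysicalSectorSize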

Full output below:
Code:
Command Line: E:\Diskspd-v2.0.17\amd64fre\diskspd.exe -c1G -w100 -t1 -o1 -b4K -r -h -L -D E:\IO2.dat

Input parameters:

        timespan:   1
        -------------
        duration: 10s
        warm up time: 5s
        cool down time: 0s
        measuring latency
        gathering IOPS at intervals of 1000ms
        random seed: 0
        path: 'E:\IO2.dat'
                think time: 0ms
                burst size: 0
                software cache disabled
                hardware write cache disabled, writethrough on
                performing write test
                block size: 4096
                using random I/O (alignment: 4096)
                number of outstanding I/O operations: 1
                thread stride size: 0
                threads per file: 1
                IO priority: normal



Results for timespan 1:
*******************************************************************************

actual test time:       10.01s
thread count:           1
proc count:             32

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|  66.78%|   4.37%|   62.41%|  33.08%
   1|   0.00%|   0.00%|    0.00%| 100.01%
   2|   0.00%|   0.00%|    0.00%|  99.85%
   3|   0.00%|   0.00%|    0.00%| 100.01%
   4|   0.00%|   0.00%|    0.00%|  99.85%
   5|   0.00%|   0.00%|    0.00%| 100.01%
   6|   0.00%|   0.00%|    0.00%|  99.85%
   7|   0.00%|   0.00%|    0.00%| 100.01%
   8|   0.00%|   0.00%|    0.00%| 100.01%
   9|   0.00%|   0.00%|    0.00%|  99.85%
  10|   0.00%|   0.00%|    0.00%|  99.85%
  11|   0.00%|   0.00%|    0.00%|  99.70%
  12|   0.00%|   0.00%|    0.00%|  99.85%
  13|   0.00%|   0.00%|    0.00%| 100.01%
  14|   0.00%|   0.00%|    0.00%|  99.85%
  15|   0.00%|   0.00%|    0.00%|  99.85%
  16|   0.00%|   0.00%|    0.00%|  99.85%
  17|   0.00%|   0.00%|    0.00%|  99.85%
  18|   0.00%|   0.00%|    0.00%|  99.85%
  19|   0.00%|   0.00%|    0.00%| 100.01%
  20|   0.00%|   0.00%|    0.00%| 100.01%
  21|   0.00%|   0.00%|    0.00%|  99.85%
  22|   0.00%|   0.00%|    0.00%|  99.70%
  23|   0.00%|   0.00%|    0.00%| 100.01%
  24|   0.00%|   0.00%|    0.00%| 100.01%
  25|   0.00%|   0.00%|    0.00%| 100.01%
  26|   0.00%|   0.00%|    0.00%| 100.17%
  27|   0.00%|   0.00%|    0.00%| 100.01%
  28|   0.00%|   0.00%|    0.00%| 100.17%
  29|   0.00%|   0.00%|    0.00%| 100.01%
  30|   0.00%|   0.00%|    0.00%|  99.85%
  31|   0.00%|   0.00%|    0.00%| 100.01%
-------------------------------------------
avg.|   2.09%|   0.14%|    1.95%|  97.85%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | IopsStdDev | LatStdDev |  file
------------------------------------------------------------------------------------------------------------------
     0 |       378757120 |        92470 |      36.07 |    9233.50 |    0.104 |      36.33 |     0.011 | E:\IO2.dat (1024MB)
------------------------------------------------------------------------------------------------------------------
total:         378757120 |        92470 |      36.07 |    9233.50 |    0.104 |      36.33 |     0.011

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | IopsStdDev | LatStdDev |  file
------------------------------------------------------------------------------------------------------------------
     0 |               0 |            0 |       0.00 |       0.00 |    0.000 |       0.00 |       N/A | E:\IO2.dat (1024MB)
------------------------------------------------------------------------------------------------------------------
total:                 0 |            0 |       0.00 |       0.00 |    0.000 |       0.00 |       N/A

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | IopsStdDev | LatStdDev |  file
------------------------------------------------------------------------------------------------------------------
     0 |       378757120 |        92470 |      36.07 |    9233.50 |    0.104 |      36.33 |     0.011 | E:\IO2.dat (1024MB)
------------------------------------------------------------------------------------------------------------------
total:         378757120 |        92470 |      36.07 |    9233.50 |    0.104 |      36.33 |     0.011


  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |        N/A |      0.092 |      0.092
   25th |        N/A |      0.099 |      0.099
   50th |        N/A |      0.101 |      0.101
   75th |        N/A |      0.103 |      0.103
   90th |        N/A |      0.121 |      0.121
   95th |        N/A |      0.124 |      0.124
   99th |        N/A |      0.128 |      0.128
3-nines |        N/A |      0.220 |      0.220
4-nines |        N/A |      0.407 |      0.407
5-nines |        N/A |      1.180 |      1.180
6-nines |        N/A |      1.180 |      1.180
7-nines |        N/A |      1.180 |      1.180
8-nines |        N/A |      1.180 |      1.180
9-nines |        N/A |      1.180 |      1.180
    max |        N/A |      1.180 |      1.180
 

JayG30

Active Member
Feb 23, 2015
It just occurred to me that I can run DiskSpd inside the Windows Server 2016 VM as well.
And what do you know... the results inside the VM are about half the speed.

Code:
Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | IopsStdDev | LatStdDev |  file
------------------------------------------------------------------------------------------------------------------
     0 |       173297664 |        42309 |      16.51 |    4226.33 |    0.232 |     219.65 |     0.070 | E:\IO2.dat (1024MB)
------------------------------------------------------------------------------------------------------------------
total:         173297664 |        42309 |      16.51 |    4226.33 |    0.232 |     219.65 |     0.070

Code:
Command Line: C:\Diskspd-v2.0.17\amd64fre\diskspd.exe -c1G -w100 -t1 -o1 -b4K -r -h -L -D E:\IO2.dat

Input parameters:

        timespan:   1
        -------------
        duration: 10s
        warm up time: 5s
        cool down time: 0s
        measuring latency
        gathering IOPS at intervals of 1000ms
        random seed: 0
        path: 'E:\IO2.dat'
                think time: 0ms
                burst size: 0
                software cache disabled
                hardware write cache disabled, writethrough on
                performing write test
                block size: 4096
                using random I/O (alignment: 4096)
                number of outstanding I/O operations: 1
                thread stride size: 0
                threads per file: 1
                IO priority: normal



Results for timespan 1:
*******************************************************************************

actual test time:       10.01s
thread count:           1
proc count:             4

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|  24.04%|   2.34%|   21.70%|  76.01%
   1|   0.94%|   0.00%|    0.94%|  99.11%
   2|   0.47%|   0.00%|    0.47%|  99.58%
   3|  12.02%|  10.77%|    1.25%|  88.03%
-------------------------------------------
avg.|   9.36%|   3.28%|    6.09%|  90.68%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | IopsStdDev | LatStdDev |  file
------------------------------------------------------------------------------------------------------------------
     0 |       173297664 |        42309 |      16.51 |    4226.33 |    0.232 |     219.65 |     0.070 | E:\IO2.dat (1024MB)
------------------------------------------------------------------------------------------------------------------
total:         173297664 |        42309 |      16.51 |    4226.33 |    0.232 |     219.65 |     0.070

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | IopsStdDev | LatStdDev |  file
------------------------------------------------------------------------------------------------------------------
     0 |               0 |            0 |       0.00 |       0.00 |    0.000 |       0.00 |       N/A | E:\IO2.dat (1024MB)
------------------------------------------------------------------------------------------------------------------
total:                 0 |            0 |       0.00 |       0.00 |    0.000 |       0.00 |       N/A

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | IopsStdDev | LatStdDev |  file
------------------------------------------------------------------------------------------------------------------
     0 |       173297664 |        42309 |      16.51 |    4226.33 |    0.232 |     219.65 |     0.070 | E:\IO2.dat (1024MB)
------------------------------------------------------------------------------------------------------------------
total:         173297664 |        42309 |      16.51 |    4226.33 |    0.232 |     219.65 |     0.070


  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |        N/A |      0.142 |      0.142
   25th |        N/A |      0.209 |      0.209
   50th |        N/A |      0.227 |      0.227
   75th |        N/A |      0.241 |      0.241
   90th |        N/A |      0.253 |      0.253
   95th |        N/A |      0.277 |      0.277
   99th |        N/A |      0.420 |      0.420
3-nines |        N/A |      1.078 |      1.078
4-nines |        N/A |      3.110 |      3.110
5-nines |        N/A |      5.628 |      5.628
6-nines |        N/A |      5.628 |      5.628
7-nines |        N/A |      5.628 |      5.628
8-nines |        N/A |      5.628 |      5.628
9-nines |        N/A |      5.628 |      5.628
    max |        N/A |      5.628 |      5.628