Areca - Volume Cannot Be Extended - Exceed Maximum Clusters


jtisdale

New Member
Feb 10, 2016
I'm running Windows 7 64-bit with an Areca 1882-16i. I first configured an 8x4TB RAID 6 array using 8 Hitachi SATA III 7200 RPM NAS drives, producing a 24TB volume. Textbook. Flawless. Once configured, I copied my precious data from another 8 matching drives hanging off my motherboard's 8 SATA ports onto the RAID 6 volume.

Once the data was comfy and cozy on my Areca array, I migrated the second set of 8 drives off my motherboard and onto the Areca. I followed Areca's protocol and modified the volume set to include the additional 8 drives, so all 16 drives were happily residing on the same volume set.

[Attachment: Areca1.PNG]

Then I went to execute the magical EXTEND function and received the error message, "The volume cannot be extended because the number of clusters will exceed the maximum number of clusters supported by the file system."

[Attachment: Areca2.PNG]

This little caveat somehow eluded all my previous research and slapped me in the face.

[Attachment: Areca3.PNG]

So now I'm not in my happy place. Is my only option to somehow back out the second set of 8 drives from the volume set [without losing any data!] and then create a new volume from them, and if so, precisely how? I would prefer to have a single volume, but anything is better than listening to half my drives spinning away with nothing to do.

Help greatly appreciated! BTW, Areca tech support kindly replied to my trouble ticket indicating they are on Chinese New Year holiday and cannot help me for another week.

Thanks,
John
 

cesmith9999

Well-Known Member
Mar 26, 2013
When you formatted your volume, what was your allocation unit size? You can check with fsutil (look at the "Bytes Per Cluster" line):

PS C:\Users\v-chrsmi> fsutil fsinfo ntfsinfo D:
NTFS Volume Serial Number : 0x8076ab1c76ab11c8
NTFS Version : 3.1
LFS Version : 2.0
Number Sectors : 0x0000000022eb9fff
Total Clusters : 0x00000000045d73ff
Free Clusters : 0x00000000029ff1bc
Total Reserved : 0x0000000000000f80
Bytes Per Sector : 512
Bytes Per Physical Sector : 512
Bytes Per Cluster : 4096
Bytes Per FileRecord Segment : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 0x000000000de80000
Mft Start Lcn : 0x00000000000c0000
Mft2 Start Lcn : 0x0000000000000002
Mft Zone Start : 0x0000000001171500
Mft Zone End : 0x0000000001188ea0
Resource Manager Identifier : some GUID

Chris
 

cesmith9999

Well-Known Member
Mar 26, 2013
NTFS maximum volume size by cluster (allocation unit) size:

0-16 TB - 4K clusters
16-32 TB - 8K clusters
32-64 TB - 16K clusters
64-128 TB - 32K clusters
128-256 TB - 64K clusters
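
These tiers fall out of NTFS's 2^32-cluster ceiling: maximum volume size = 2^32 x cluster size. For example:

2^32 clusters x 8,192 bytes/cluster = 32 TB

So a volume formatted with 8K clusters can never grow past 32 TB, while your 16-drive RAID 6 (14 data drives x 4 TB) comes to roughly 56 TB.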

Chris
 

jtisdale

New Member
Feb 10, 2016
C:\>fsutil fsinfo ntfsinfo d:
NTFS Volume Serial Number : 0x0412a80a12a7feb4
Version : 3.1
Number Sectors : 0x0000000ae9f36fff
Total Clusters : 0x00000000ae9f36ff
Free Clusters : 0x000000002eb9706d
Total Reserved : 0x0000000000000000
Bytes Per Sector : 512
Bytes Per Cluster : 8192
Bytes Per FileRecord Segment : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 0x00000002dbc40000
Mft Start Lcn : 0x0000000000060000
Mft2 Start Lcn : 0x0000000000000001
Mft Zone Start : 0x000000001db743e0
Mft Zone End : 0x000000001db77580
RM Identifier: B719E063-C630-11E5-9A10-74D02B28B13F
 

jtisdale

New Member
Feb 10, 2016
Chris,

Thanks a bunch for your help. Obviously, changing bytes per cluster is a destructive operation. Is there a non-destructive option available, or am I going to have to ante up for a bunch more drives onto which to move the data for reconfiguration?

Thanks again,
John
 

Quasduco

Active Member
Nov 16, 2015
So, if I understand what you said in the OP, you have 16 drives total but only 8 drives' worth of data, yes? If so: use the second set of 8, which is now empty, to make a volume with the correct bytes per cluster; move everything onto that correctly set up volume; then add the first 8, now emptied, back onto it.
 

izx

Active Member
Jan 17, 2016
Is my only option to somehow back out the second set of 8 drives from the volume set [without losing any data!] and then create a new volume from them, and if so, precisely how?
Since you already migrated, extending the Areca raidset and volume set to all the drives, there is NO WAY to back out without losing data.

I'd do what Quasduco suggested.
  • In Windows Disk Management, create a new volume in the unallocated space, say drive E:. Cluster size doesn't matter here.
  • Copy all data from D: to E:. I prefer FastCopy because it preallocates space (no fragmentation) and preserves timestamps and NTFS attributes.
  • Reformat D: with 64K clusters (see the sketch after this list).
  • Copy all data back from E: to D:.
  • Delete E:.
  • Now extend D: into the unallocated space.
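
A minimal sketch of the copy / reformat / copy-back steps from an elevated command prompt, assuming D: is the existing volume and E: the temporary one (robocopy shown as a stock-Windows alternative to FastCopy; flags and paths are illustrative, not gospel):

:: copy everything, preserving ACLs, attributes, and timestamps (/COPYALL needs admin)
robocopy D:\ E:\ /E /COPYALL /DCOPY:T /R:1 /W:1
:: quick-reformat D: with NTFS and 64 KB allocation units
format D: /FS:NTFS /A:64K /Q
:: copy it all back
robocopy E:\ D:\ /E /COPYALL /DCOPY:T /R:1 /W:1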
I'm paranoid, so I also make volume-wide pre-copy hashes of all files and then verify them on the destination post-copy, before destroying the source. Rather than corruption per se, I often find things like missing hard links, etc. I like rhash for this; it makes it really easy and fast (but it's CLI only).
 

izx

Active Member
Jan 17, 2016
Obviously changing bytes per cluster is a destructive operation. Is there a non-destructive option available....
Officially, no, but various free and non-free utilities have claimed to do "lossless" cluster-size changes for years. Off the top of my head, Acronis Disk Director is one, and I'm pretty sure it supported the Areca 12xx series back in the day. But I've only done it on MBR (not GPT) partitions, and with data backed up. I wouldn't attempt this without a backup of all critical data.

Since you have enough room, just do the volume swap.
 

Evan

Well-Known Member
Jan 6, 2016
An alternative to rhash that I like is md5deep

Sounds like you have a way out, though, in terms of making it work as one big filesystem.
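
A minimal md5deep sketch of that record-then-verify flow (paths and file names are placeholders):

:: record hashes of the source
md5deep -r D:\ > sums.md5
:: after the copy, negative matching prints any file whose hash is NOT in the list
md5deep -r -x sums.md5 E:\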
 

izx

Active Member
Jan 17, 2016
An alternative to rhash that I like is md5deep

Sounds like you have a way out, though, in terms of making it work as one big filesystem.
I've found that for a spinning RAID volume consisting mostly of large (100MB+) files, multi-threaded (one-file-per-thread) hashing forces the disks to seek between files, often dropping total throughput below what the array is capable of on large sequential reads. But single-threaded MD5 hashing can then become CPU-bound.

Although it's single-threaded, I like rhash for this scenario because it supports the Edon-R 512 hash, which is just about twice as fast as MD5 (in objective cycles per byte). rhash has switches for directing stdout and stderr to a file, so running it recursively from the root gives you one nice large checksum file. When checking, the --skip-ok switch suppresses the chatter from files that verify fine. The downside on Windows is that it doesn't support junctions/links or long paths (>255 characters).
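
Concretely, something like this (the checksum file name is just a placeholder):

rhash --edonr512 -r -o sums.txt D:\
rhash -c --skip-ok sums.txt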

The blake2 hash seems promising, as it has special 4-way (blake2bp) and 8-way (blake2sp) parallel versions that will operate in chunks upon the same file instead of the standard one-file-per-thread scenario. The reference executables are pretty barebones though.
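
For example, with the reference b2sum (if I remember right, the -a switch picks the variant):

b2sum -a blake2sp somebigfile.bin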

I noticed md5deep has a piecewise hashing option now (-p <size>), and it lets you pick custom block sizes. Will have to experiment with that.
 

jtisdale

New Member
Feb 10, 2016
Thanks Chris, Quasduco, izx, and Evan.

The process is well under way and going smoothly. Couldn't have done it without your help.

Thanks again, J