Giant Pooled Drive Help! 800TB

Myth

Member
Feb 27, 2018
148
7
18
Los Angeles
Hey guys,

So I make SAN servers and one of our clients has asked for a 100TB SSD Cache volume that connects to 700TB of HDD storage.

The good news is that our SAN software already has the tiering SSD cache code written, but it's not designed to write to multiple drives.

For example, our SSD caching software will move stale data from the SSDs onto the HDDs after 12 days of no use. And if the SSD tier gets to be over 75% full, it will start moving data off immediately.
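Just to illustrate the policy described above (demote after 12 idle days, and evict eagerly past the 75% watermark), here's a rough sketch. All names, thresholds, and structures are made up for illustration; this isn't our actual SAN code.

```python
import time
from dataclasses import dataclass

# Hypothetical sketch of the tiering policy: demote files untouched for
# 12 days, and keep demoting the coldest files once the SSD tier is
# over 75% full. Thresholds mirror the ones described in the post.
IDLE_SECONDS = 12 * 24 * 3600   # 12 days with no access
HIGH_WATERMARK = 0.75           # start demoting above 75% full

@dataclass
class CachedFile:
    path: str
    size: int
    last_access: float          # epoch seconds

def files_to_demote(files, ssd_capacity, now=None):
    """Return the files that should move from SSD to HDD, coldest first."""
    now = now or time.time()
    used = sum(f.size for f in files)
    by_age = sorted(files, key=lambda f: f.last_access)

    # Rule 1: anything idle for 12+ days gets demoted.
    demote = [f for f in by_age if now - f.last_access > IDLE_SECONDS]

    # Rule 2: if the tier is still over the watermark, evict the
    # coldest remaining files until it isn't.
    remaining = used - sum(f.size for f in demote)
    for f in by_age:
        if remaining <= HIGH_WATERMARK * ssd_capacity:
            break
        if f not in demote:
            demote.append(f)
            remaining -= f.size
    return demote
```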

The only problem is that the SSD caching software was designed to push the data to a single HDD volume, and I have five volumes, each about 170TB. I can ingest them via Fibre Channel or iSCSI, or link them as a NAS via 40GbE. Still, the SSD cache can only link to one volume.

So I was thinking... how can I make five volumes appear as one volume? Then I thought about DFS, but that doesn't seem to work: DFS only links multiple folders within one SMB path, with each folder representing a different volume. The SSD caching software would still only be able to write to one HDD volume, because it would have to select a single folder from within the DFS namespace, and that folder maps to a single volume. You see what I mean?

Anyways, our SAN developers also have a program that pools volumes together into a single volume larger than the 256TB limit that NTFS imposes, and our SAN software has to use NTFS.

So I'll probably end up using the pooling software and sharing that out via SMB as a NAS: ingest all five HDD volumes into one Windows server, use our SAN software to pool all 700TB together, then create a NAS share.

I'll then log onto the SSD server and link the NAS share as the HDD volume to send all the old or overflow data to. But I was just wondering if anyone had any better ideas on how to get one giant 700TB volume to appear as a single folder via SMB, iSCSI, or Fibre Channel using NTFS.

Thanks!
 

DavidRa

Infrastructure Architect
Aug 3, 2015
297
134
43
Central Coast of NSW
www.pdconsec.net
I can't even begin to think why this is a good idea. There's no way you can chkdsk/fsck a volume this large. You can't back it up effectively. It's completely unmanageable.

Why does it have to be a single block volume? This seems like a really bad plan. Actually scratch that, it's not a plan, it's a pending disaster. Just because the customer asks for it doesn't mean you should provide it.

If you really think this is a good approach, we'll need more info (e.g. what are these 100TB/700TB volumes - local disks? mdadm volumes? Storage Spaces? iSCSI LUNs?)
 

Myth

Member
Feb 27, 2018
148
7
18
Los Angeles
I think we will use the pooling software, which just copies the data from the SSD cache to a single SMB share. The data is then distributed across multiple volumes based on the percentage of free space.
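For anyone curious, "% of free space" placement is simple enough to sketch. This is a made-up illustration of the idea, not the actual pooling software: each incoming file lands on whichever member volume currently has the highest fraction of free space.

```python
# Illustrative sketch of free-space-percentage placement across pool
# members. Volume names and sizes are hypothetical.

def pick_volume(volumes):
    """volumes: dict of name -> (free_bytes, total_bytes).
    Return the member with the highest percentage of free space."""
    return max(volumes, key=lambda v: volumes[v][0] / volumes[v][1])

def place(volumes, size):
    """Place a file of the given size on the emptiest volume and
    update that volume's free space. Returns the chosen volume."""
    target = pick_volume(volumes)
    free, total = volumes[target]
    volumes[target] = (free - size, total)
    return target
```

Writes naturally rotate across members as each one fills, which is why the pool levels out over time instead of filling one disk first.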
 

kapone

Well-Known Member
May 23, 2015
1,049
621
113
On a slight tangent... ever heard of StableBit DrivePool? It can combine Windows "disks" into a pool that appears as a single drive.

Edit: Scratch that. I see that you hit the 256TB limit.

Edit2: Scratch Edit1. It might still work. Try it? They have a trial.
 

ecosse

Active Member
Jul 2, 2013
409
81
28
Why not use ReFS?

StableBit DrivePool should work, although I've never really tested how performant it is.
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,301
410
83
The largest NTFS volume size is 256TB. You would need to split up your folders.
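Where the 256TB figure comes from: NTFS addresses roughly 2**32 clusters per volume, so the ceiling is cluster size times 2**32. A quick back-of-the-envelope check (note that newer Windows Server releases allow larger clusters, which raises the ceiling):

```python
# NTFS volume limit = max cluster count (about 2**32) * cluster size.
# With the classic 64 KiB max cluster size that works out to 256 TiB.

def ntfs_max_volume_tib(cluster_size_kib):
    """Approximate NTFS volume ceiling in TiB for a given cluster size."""
    clusters = 2 ** 32
    return clusters * cluster_size_kib * 1024 / 2 ** 40
```

So 64 KiB clusters give 256 TiB, and a 2 MiB cluster size (supported on recent Windows Server versions) pushes that to 8 PiB.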

ReFS... has a much larger limit...

And why does the SAN software limit you to NTFS?

Chris
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,301
410
83
That means the developers are depending on NTFS features that are not in ReFS. Do you have a list of those (missing) features?

Chris
 

devlinse

New Member
May 1, 2012
8
0
1
Stablebit's overhead is pretty low to negligible.
I only use it in a home 1Gb context (although 10Gb is on the cards) and it's fine performance-wise; I've never once thought it was a bottleneck on a 60TB pool. What I like most, though, is that because it's a layer over regular NTFS, if a non-duplicated drive fails, all the files on the remaining member disks are still accessible.

Having been burnt by a multi-disk failure in the past when using a hardware RAID solution (largely through my own inattention), that's worth a lot to me. The interface could use some work; it has some odd UX flourishes, but it's not something I use regularly.

Depends on your use case, I guess.