Is there a hardware RAID card that can expand RAID 10


Eds89

Member
Feb 21, 2016
64
0
6
35
Hi,

I just shelled out for a 24 bay chassis and an Intel 24 port expander to complement my LSI 9260-8i, in the hope that I could plug in some additional drives and expand a RAID 10 array.

As it turns out, the LSI 9260 can only expand RAID 5 and 6.
Is this just a limitation of the LSI card, or does this apply to most hardware RAID cards?
Our HP P2000 SAN at work is configured in RAID 10, and new drives/spans can be added without issue, so I imagine it is a limitation of my particular card?

If so, can anyone offer any alternatives for hardware RAID cards that can expand RAID 10 arrays?

Thanks
Eds
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
I don't know that card, but all the HP cards I have used can expand. I would have thought it was simple; until now, I would have assumed all RAID cards could expand RAID 10.
 

Eds89

Member
Feb 21, 2016
64
0
6
35
As would I.

Not sure if this comes down to a hardware or software limitation. If anyone else has come up against this before, I'd love to know.

I would try to contact Broadcom for direct support, but I think this is a Dell rebadged card, so they won't give me support, and the serial isn't found on Dell's support site :(
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
1,142
594
113
New York City
www.glaver.org
Our HP P2000 SAN at work is configured in RAID 10, and new drives/spans can be added without issue, so imagine it is a limitation of my particular card?

If so, can anyone offer any alternatives for hardware RAID cards that can expand RAID 10 arrays?
What operating system / filesystem are you using? When you're talking about potentially 24 drives, that's both a lot of work for the controller to keep track of, and it can create a volume that is extremely unpleasant to work with at the operating system level (fsck-ing a 64TB UFS2 volume is nobody's idea of fun).

Fancy (expensive) storage systems from HP / EMC / NetApp / etc. generally use custom (or at least heavily customized) filesystems to take advantage of their fancy controller hardware.

When doing this on commodity hardware, your choices are somewhat more limited. I like ZFS on FreeBSD (FreeNAS uses a very similar codebase). ZoL (ZFS on Linux) is also a possibility if you're more of a Linux user, though I'm not as familiar with feature parity over there. I can tell you that I have a number of pools larger than 128TB and have a bid out to build a master / slave redundant pair of half-petabyte ZFS servers. The existing and proposed systems all use IT-mode controllers and let ZFS deal with the "RAID"-like and "mirroring"-like functions of traditional RAID. With decent hardware, even the "baby" 32TB pools provide read / write performance exceeding 700Mbyte/sec for as long as desired - as part of burn-in we run them flat-out for days with no performance degradation.
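As a concrete sketch of the IT-mode-plus-ZFS approach described above: ZFS builds the "RAID 10"-like layout as a stripe of mirror vdevs, and the pool can be grown later simply by adding another mirror pair. (Pool and device names below are made-up examples, not the poster's actual setup; ZFS does not restripe existing data, but new writes spread across all mirrors.)

```shell
# Create a striped-mirror ("RAID 10"-like) pool from four disks:
zpool create tank mirror da0 da1 mirror da2 da3

# Later, grow the pool by adding another mirror pair -- no array
# rebuild required, unlike RAID 10 expansion on many HW controllers:
zpool add tank mirror da4 da5

# Confirm the new vdev layout:
zpool status tank
```

Note that `zpool add` is permanent; there is no shrinking the pool back afterwards, so it pays to double-check the device names before running it.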

You might want to take a step back and evaluate what essential features you need, and what features would be nice to have but would not impact the project if you couldn't deliver some or all of them. Then see what is out there that can accommodate all your essential features and at least some of your optional features.

My background is in ZFS, so I freely admit to the "when the only tool you have is a hammer, everything looks like a nail" logical trap. There are enough other folks here doing non-ZFS things that you'll likely get some good ideas about alternative solutions.

For my ZFS experiences at the 32TB and 128TB level, take a look at my RAIDzilla II project and my RAIDzilla 2.5 upgrade. The forthcoming RAIDzilla III will be a 12gbit/sec expander-based system with a design goal of 1.2PB in 12RU of rack space, capable of saturating a pair of 40GbE links to clients.
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,709
517
113
Canada
I agree; software-based RAID solutions have matured, and there's really little need now, save for a few niche cases, to use hardware-based RAID at all. The problem with RAID 10 expansion generally, as I understand it anyway, is the lack of parity on the disks. In order to expand the array, you essentially have to destroy your stripes and re-create them on the fly. With parity-based RAID implementations, like RAID 5, 6 etc., all those stripes can be re-created on the fly using parity calculations done on the controller and then written to the disks. Either way round, the stripes are destroyed and re-created. Software-based RAID obviously has some advantages here, but the end result is essentially the same. With RAID 10, it's usually simpler, quicker and much more reliable to destroy the existing array and rebuild it from a back-up once you have added your new disks :)
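The restriping point above can be sketched in a few lines of Python. This is a toy model, not any controller's actual firmware: assuming simple round-robin striping across mirror pairs, the pair a given stripe unit lives on depends on how many pairs exist, so adding a pair relocates most existing units.

```python
# Toy model (hypothetical, not vendor firmware): RAID 10 round-robins
# stripe units across mirror pairs, so each unit's home pair depends
# on the total pair count.

def raid10_location(unit: int, pairs: int) -> tuple[int, int]:
    """Return (mirror_pair_index, offset_within_pair) for a logical stripe unit."""
    return (unit % pairs, unit // pairs)

# Layout of the first six stripe units on a 2-pair (4-disk) array:
old = [raid10_location(u, pairs=2) for u in range(6)]

# The same units after growing to 3 pairs (6 disks):
new = [raid10_location(u, pairs=3) for u in range(6)]

moved = sum(1 for a, b in zip(old, new) if a != b)
print(f"{moved} of 6 units would have to move")  # prints "4 of 6 units would have to move"
```

With parity RAID the controller can rebuild each relocated stripe from parity as it goes; with RAID 10 there is no parity to lean on, which is why destroy-and-restore from backup is often the more reliable route.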