Science Experiment

Quartzeye

New Member
Jul 29, 2013
I was cruising eBay and saw a few items that got me pondering a bit of an experiment, and I wanted to get input from the forum.

The idea is to take an older RAID storage array and modify it so that it is relatively up to date. I was originally looking at buying a Norco 24-disk case and building out a storage server running two 12-disk RAID 60 arrays. I was thinking that 24 2TB drives would give me about 40TB of usable storage running at 6Gb/s.
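As a sanity check on those numbers, here's a rough Python sketch of the capacity math (assuming nominal 2TB drives and two 12-disk RAID 6 spans striped together as the RAID 60):

```python
# Rough capacity check for the proposed layout: 24 x 2TB drives,
# arranged as two 12-disk RAID 6 spans striped into a RAID 60.
# RAID 6 reserves 2 drives' worth of capacity per span for parity.

DRIVE_TB = 2          # nominal drive size
DRIVES_PER_SPAN = 12  # disks in each RAID 6 span
SPANS = 2             # RAID 60 stripes across the spans
PARITY_PER_SPAN = 2   # RAID 6 uses double parity

data_drives = SPANS * (DRIVES_PER_SPAN - PARITY_PER_SPAN)
usable_tb = data_drives * DRIVE_TB

print(f"{data_drives} data drives -> {usable_tb} TB usable")
# 20 data drives -> 40 TB usable
```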

The Norco case alone is about $400, and I have seen posts around the net about reliability issues. I also saw this post where someone took an older storage array and modded it.

Are You Looking For A Less Expensive Norco 4220 / 4224 Alternative?

My only issue with this mod is that it is still a 3Gb/s system. So I was wondering whether it might be possible to replace the backplane with (2) backplanes from a C6100 12-disk system, since they support 6Gb/s and are straight SATA-disk-in to SATA-cable-out. I have not yet checked whether the backplane will fit physically, because I first want to know whether it would work electrically. Also, would the SGPIO have to be connected? If so, it might drive me toward two 12-port RAID cards rather than one 24-port card.

Also, is the Dell SAS backplane (V3X78) both SAS and SATA? Looking at the owner's manual for the C6100, I only found one backplane for the 12-disk 3.5" system. There were a couple for the 24-disk 2.5" systems.

Overall, my idea is to drop two dual-port InfiniBand cards into the storage array, directly connect them to my C6100, and then PXE boot the (4) blades of my ESX server to drive all my VMs off the storage array, making it diskless. Currently I boot off of USB and run my VMs off a slower 3Gb/s JBOD server through an InfiniBand switch. The switch limits me to 10Gb/s, but the cards in the C6100 are dual-port 20Gb/s, so increasing the bandwidth in the storage server and between it and the C6100 is the goal.
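For a rough sense of the bandwidth gain, a back-of-the-envelope sketch (assuming the 10Gb/s switch link is SDR 4x and the dual-port 20Gb/s cards are DDR 4x, both using 8b/10b encoding; PCIe and protocol overhead ignored):

```python
# SDR/DDR InfiniBand links use 8b/10b encoding, so usable data
# rate is roughly 80% of the signaling rate.

def usable_gbps(signaling_gbps, ports=1, encoding=0.8):
    """Approximate usable data rate for one HCA."""
    return signaling_gbps * encoding * ports

current = usable_gbps(10)            # one SDR 4x link through the switch
proposed = usable_gbps(20, ports=2)  # both DDR 4x ports direct-connected

print(f"via SDR switch: {current:.0f} Gb/s usable")   # 8 Gb/s
print(f"direct DDR x2:  {proposed:.0f} Gb/s usable")  # 32 Gb/s
```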
 

Mike

Member
May 29, 2012
EU
Do they have a SAS1 expander backplane? In all other cases you will get SATA III speeds anyway.
 

britinpdx

Active Member
Feb 8, 2013
Portland OR
Quartzeye said:
I also saw this post where someone took an older storage array and modded it.

Are You Looking For A Less Expensive Norco 4220 / 4224 Alternative?

My only issue with this mod is that it is still a 3Gb/s system.
I'm familiar with that linked thread from AVSForum. There are a couple of 4U chassis referred to in that thread (they were sold by Tamsolutions on eBay). One was made by AIC, the other by Supermicro.

I'm not at all familiar with the AIC chassis, but I have one of the Supermicro SC846TQ chassis referred to. The backplane in my SC846 chassis is a "passive" backplane, in that it has no expander built in. It has 24 individual SATA connectors, which allows quite a bit of flexibility, and it is perfectly happy to run at 1.5, 3.0, or 6.0 Gb/s (SATA I, II, and III respectively).
I have SATA III SSDs, as well as SATA II and SATA III drives, all connected to the backplane and working correctly at their specified speeds (assuming the appropriate controller is used, of course). In fact, I have an IBM M5014 connected to an Intel RES2SV240 expander, which is connected to the backplane via SFF-8087-to-SATA breakout cables, and it all works perfectly.

I believe there are other Supermicro SAS backplanes for that chassis that do have built-in expanders, and in that case I would expect the interface speed to be limited by the expander.
 

Lost-Benji

Member
Jan 21, 2013
The arse end of the planet
As you're aiming for a huge RAID 6/60, you will need a good RAID card and better-than-desktop-grade drives. That being said, the Chenbro 36-port expander, or a pair of the 24-port jobs, will cover what you're after.
I was also wondering, did you make a typo? Two 12-drive RAID 60s, or is it supposed to be one big RAID 60 array made up of a pair of RAID 6 arrays?

Norco chassis are OK, just aim for anything newer that has the yellow backplanes; the green ones had issues. Been there, done that, and fixed them myself.
 

Quartzeye

New Member
Jul 29, 2013
Thanks for the info. I guess it was a typo, as I want to create two 12-disk RAID 6 sets out of the 24 disks and stripe them as one RAID 60.

Granted, that reserves four of the disks for parity, two in each set, so the total usable space will be two 20TB sets for 40TB total. Running RAID 60 allows for two drives in each set to go bad before the array becomes unusable and has to be rebuilt. In that case, standard 2TB drives like the WD Red series will be more than adequate. I could make each set a little smaller and use a drive or two as a hot spare, but with the resilience of RAID 60 I feel confident that I can replace bad drives fairly quickly.
 

Chuckleb

Moderator
Mar 5, 2013
Minnesota
I may be confused on terminology, but a RAID 60 is basically a striped RAID 6. Your vulnerability is the loss of two disks in one RAID 6 array; at that point you are at risk of data loss but can still run. You should rebuild your array if any drive drops, regardless. The worst case you can survive is four drives out, two from each array.
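To make that concrete, a tiny sketch of the failure logic, where each entry is the failed-drive count in one RAID 6 span:

```python
# A RAID 60 keeps running only while every RAID 6 span
# has lost at most 2 drives (double parity per span).

def raid60_survives(failures_per_span):
    """failures_per_span: list of failed-drive counts, one per RAID 6 span."""
    return all(f <= 2 for f in failures_per_span)

print(raid60_survives([2, 2]))  # True:  4 drives out, 2 per span, still running
print(raid60_survives([3, 0]))  # False: 3 failures in one span loses the array
```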

I generally have a minimum of 1 hot spare per 12-14 disks. My arrays are 15x 3TB Reds in a RAID 6 with 1 hot spare and 2 parity drives, so 36TB usable. We don't often do that anymore (we don't use the 45-bay shells much), and on the 36-bay shells we do packs of 12, so 27TB usable. We then combine them in Linux using LVM instead of RAID 0 in hardware.
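Those capacity numbers fall out of a simple formula (a quick sketch, using the spare and parity counts per pack as described above):

```python
# Usable space for one RAID 6 pack with a hot spare carved
# out of the same shelf: only the data disks count.

def usable_tb(disks, size_tb, spares=1, parity=2):
    return (disks - spares - parity) * size_tb

print(usable_tb(15, 3))  # 36 TB: the 15x 3TB RAID 6 packs
print(usable_tb(12, 3))  # 27 TB: 12-disk packs on the 36-bay shells
```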

No matter how fast we can replace a drive, it is still faster to have the spare in the enclosure. Plus that way we can go on vacation and whatnot without having to rush back.