Norco 24-bay Franken-DAS


Dave Corder

Well-Known Member
Dec 21, 2015
I've been running a personal file server of some kind for ~20 years now...I think I started with a bunch of 160 GB Maxtor IDE drives and an 8-port IDE hardware RAID controller and it's been evolving ever since. These days, I'm all about ZFS (via TrueNAS).

First, the goods:


Higher-resolution images at Imgur:
Right now my setup consists of a Dell R720 running TrueNAS Scale, with a couple of LSI SAS 9202-16e cards connected to a pair of 24-bay server cases that now function purely as DAS boxes. I just finished a minor overhaul of one of those DAS units and wanted to share what I did.

The main chassis is a Norco RPC-4224 server case (purchased back in 2015) with 24 SAS/SATA hot-swap bays. Each row of four drives sits on a separate backplane, which has an SFF-8087 connector for attaching to whatever controller you're using and a standard 4-pin Molex power connector.

The six backplanes are connected to a Dell 9-port SAS2 expander (six internal SFF-8087 ports, three external SFF-8088 ports) purchased off eBay. These have been floating around for a while now and, IMHO, make pretty good expanders for DIY DAS/NAS projects. See this thread for more information.

The drives, expander, and cooling are powered by a 460 W HP CommonSlot server PSU with a GPU miner breakout board. The breakout board provides a dozen GPU connectors with +12V and GND; to get the +5V needed by the drives and the expander, I use a few 75 W DC-DC converters, one per pair of backplanes. I did some quick math using power numbers from a couple of different drives in my ZFS pool, and this should give me more than enough headroom on the 5V side. I 3D printed an ATX-to-HP-PSU adapter bracket and then designed and printed a support bracket for the breakout board as well.
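For the curious, the back-of-the-envelope math looks something like the sketch below. The 0.7 A per-drive figure on the 5 V rail is an assumed "typical 3.5-inch HDD" number, not a measurement of my specific drives, so plug in your own datasheet values:

```cpp
// Back-of-the-envelope 5 V budget per DC-DC converter (one converter feeds
// two 4-bay backplanes). The per-drive current is an assumed typical 3.5"
// HDD figure -- substitute the 5 V numbers from your own drives' datasheets.
#include <cstdio>

int main() {
    const double amps5vPerDrive = 0.7;   // assumed 5 V draw per spinning 3.5" drive
    const int    drivesPerConv  = 8;     // two 4-bay backplanes per converter
    const double converterWatts = 75.0;  // rated output of each DC-DC converter

    const double loadWatts = amps5vPerDrive * 5.0 * drivesPerConv;   // ~28 W
    std::printf("5 V load: %.0f W of %.0f W available (%.0f%% headroom)\n",
                loadWatts, converterWatts,
                100.0 * (converterWatts - loadWatts) / converterWatts);
}
```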

I used some leftover modular PSU cables and other bits and bobs to make custom cables to wire up the Molex power connectors for the backplanes to the DC-DC converters and the breakout board.

Previously, this box used a general-purpose consumer ATX PSU. With this overhaul, I switched to the HP PSU + breakout board for a few reasons: (hopefully) improved efficiency, simpler internal wiring, and more available space inside the chassis.

One of the reasons I wanted more space inside the chassis was for the expander - I had my SFF-8088 cables routed through the ATX I/O opening on the rear panel to the expander, but the expander was just sitting on plastic standoffs and not really attached to anything. One of the things I did was design and 3D print a shield for the I/O opening to expose the expander's SFF-8088 ports. Then I made some standoffs of the necessary height, attached them to the expander, and glued them to the bottom of the chassis so everything would stay put and look nice and neat.

And one of the main reasons for this overhaul in the first place was to improve the airflow and cooling, which was, shall we say, sub-optimal. I have the 120mm fan wall for the Norco, but when I originally put this together, it lived in my home office, so I prioritized low noise over cooling and allowed my drives to run a little hotter than I probably should have. But I've moved since then, and now my rack basically has its own room, so noise is not as much of a concern.

I had been using three 120mm Noctua fans on the fan wall and two 80mm Noctua fans on the rear panel. I replaced those with 3x 120x38mm high-speed/high-pressure server fans on the fan wall, plus 2x 80x38mm and 1x 92x38mm fans on the rear panel (the two 80mm fans are in the factory mounting locations; the 92mm fan is on a 3D-printed bracket that occupies 5 of the 7 PCI slots). All of the fans are 4-pin PWM fans.

Those fans move a metric crapload of air. They're definitely overkill, but I wanted to make sure I wasn't going to wind up back in the same situation of inadequate airflow, and I decided to go with these fans (and slow them down if necessary) rather than try to work out exactly how much airflow I needed and find a set of fans to give me exactly that amount (i.e., I went the lazy route).

To control the fans, it took a couple of tries to find a suitable PWM fan speed controller. It turns out most of the cheap/common "PWM fan controllers" just PWM the +V pin on the fan header to undervolt the fan(s). Eventually I found some that leave the +12V line alone and send a proper PWM signal on the fans' actual PWM input pin. These worked great at slowing the fans down to what I think is a good balance of airflow and noise, and I can always adjust later if necessary. I went with one speed controller for the fan wall and one for the rear fans. The controllers I bought have the adjustment potentiometer on a long 3-wire cable, so once again I fired up the 3D printer and designed and printed a bracket to hold them in an empty PCI slot for easy access. As a bonus, the ones I went with use a 6-pin GPU power connector, so it was easy to connect them to the breakout board.

Last but not least, one of the annoying things about this particular chassis is that, when using the supposedly official rails (RL-26, IIRC), the case didn't actually line up properly in my rack - it sat low by about 4 mm, so I had to sacrifice the rack space below it. This always bugged me, so while I had it out of the rack, I modified the mounting holes on the inner rails to reposition the case so it now lines up properly. (I put a 1/8" carbide end mill in my drill press and used it as a poor man's CNC to slowly enlarge the holes in the right direction.)

I spent the last few days going on an Amazon shopping spree (not gonna lie, some of my parts choices were influenced by what I could get next-day via Amazon Prime, since I was making this all up as I went along) and performing the overhaul. I just put it back in my rack tonight and, fingers crossed, so far so good. After a few zip ties to tidy things up on the inside, I'm quite pleased with how it turned out.

 

Dave Corder

Well-Known Member
Dec 21, 2015
One thing I'd like to do in the future is sort of a poor-man's IPMI solution.

Basically, I want to be able to power it on remotely, monitor temperatures inside, and monitor and control the fan speeds.

It'd be pretty straightforward to do this with an ESP32 board with Ethernet, a few off-the-shelf temperature sensors, and some I/O pins wired up as tach inputs and PWM outputs (the ESP32 has hardware PWM generators). I've already done enough research to have an idea of the individual components; it's mostly a matter of integrating them and then either writing some custom software in PlatformIO or (more likely for me at the moment) a device configuration for ESPHome. Power it off the 12VSTBY pin on the PSU power connector, and I'd be good to go.
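Something like this rough PlatformIO (Arduino framework) sketch is the idea for the fan side - the GPIO numbers, the 10k NTC thermistor, and the temperature-to-duty curve are just placeholders, and the Ethernet/remote-power part isn't shown (that would come from ESPHome or custom code):

```cpp
// Rough sketch of the fan-control side (Arduino-ESP32 core 2.x LEDC API).
// GPIO numbers, the 10k NTC divider, and the temperature->duty curve are
// placeholders -- adjust for whatever sensors and fans actually get wired up.
#include <Arduino.h>
#include <math.h>

constexpr int PWM_PIN  = 25;    // to the fans' PWM (control) wire
constexpr int TACH_PIN = 26;    // open-collector tach output, needs a pull-up
constexpr int NTC_PIN  = 34;    // ADC pin with a 10k NTC / 10k resistor divider

constexpr int PWM_CH   = 0;
constexpr int PWM_FREQ = 25000; // 25 kHz, per the Intel 4-wire fan spec
constexpr int PWM_BITS = 8;     // duty range 0..255

volatile uint32_t tachPulses = 0;
void IRAM_ATTR onTach() { tachPulses++; }

float readTempC() {
  // Crude beta-model conversion for a 10k NTC (B = 3950) on the low side
  // of the divider; swap in the math for whatever sensor you actually use.
  float v = analogRead(NTC_PIN) / 4095.0f;
  float r = 10000.0f * v / (1.0f - v);
  float invT = 1.0f / 298.15f + logf(r / 10000.0f) / 3950.0f;
  return 1.0f / invT - 273.15f;
}

void setup() {
  Serial.begin(115200);
  pinMode(TACH_PIN, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(TACH_PIN), onTach, FALLING);
  ledcSetup(PWM_CH, PWM_FREQ, PWM_BITS);
  ledcAttachPin(PWM_PIN, PWM_CH);
}

void loop() {
  // Simple linear curve: ~30% duty below 30 C, 100% at 45 C and above.
  float t = readTempC();
  int duty = constrain(map((int)t, 30, 45, 77, 255), 77, 255);
  ledcWrite(PWM_CH, duty);

  // Count tach pulses for one second; most fans give 2 pulses per revolution.
  tachPulses = 0;
  delay(1000);
  uint32_t rpm = tachPulses * 60 / 2;

  Serial.printf("temp %.1f C  duty %d/255  %u rpm\n", t, duty, rpm);
}
```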
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
really great re-use of parts. nice and clean!

How much blood did you leave in that chassis? :p I think I was up to half a pint in the norco I had (before selling it).
 

Dave Corder

Well-Known Member
Dec 21, 2015
really great re-use of parts. nice and clean!

How much blood did you leave in that chassis? :p I think I was up to half a pint in the norco I had (before selling it).
Yeah, I've got two of these chassis, and man, do they have some rough edges. The build quality (and quantity of blood lost) is night and day compared to my Supermicro CSE-846 case (but so is the price, especially these days).
 

pranch

New Member
Oct 15, 2024
Love this - I have two of the same chassis and did almost exactly the same thing as you.

Similar chassis are so expensive now, unless you want a NetApp, which is super loud, power hungry, and also sort of pricey.

I run mine on a VM with PCI passthrough to a controller board. Going to keep this running until it all dies lol
 

Dave Corder

Well-Known Member
Dec 21, 2015
Love this - I have two of the same chassis and did almost exactly the same thing as you.

Similar chassis are so expensive now, unless you want a NetApp, which is super loud, power hungry, and also sort of pricey.

I run mine on a VM with PCI passthrough to a controller board. Going to keep this running until it all dies lol
Yeah, I can't believe nobody has really jumped in to fill the gap left by Norco folding, at least in the US. I see a number of options on AliExpress for 4U/24-bay chassis, though the shipping on those things is pretty high. I'll probably keep my Norcos running until they die as well.
 

hmw

Well-Known Member
Apr 29, 2019
One thing I'd like to do in the future is sort of a poor-man's IPMI solution.

Basically, I want to be able to power it on remotely, monitor temperatures inside, and monitor and control the fan speeds.

It'd be pretty straightforward to do this with an ESP32 board with Ethernet, a few off-the-shelf temperature sensors, and some I/O pins wired up as tach inputs and PWM outputs (the ESP32 has hardware PWM generators). I've already done enough research to have an idea of the individual components; it's mostly a matter of integrating them and then either writing some custom software in PlatformIO or (more likely for me at the moment) a device configuration for ESPHome. Power it off the 12VSTBY pin on the PSU power connector, and I'd be good to go.
Sipeed NanoKVM and Sipeed NanoKVM-PCIe

Sipeed is (slowly) open-sourcing the software stack in response to concerns about it being a China-based company, etc.
 

kapone

Well-Known Member
May 23, 2015
If you had managed to obtain the power distribution boards that go with the HP PSUs (minor surgery needed to make the PSUs work like ATX PSUs)...this would have been even easier. :) The boards provide 5 V as well, so there's no need for DC-DC converters.

I converted my 48-bay Chenbro chassis using the same PSUs (except with two of them, one per 24 bays). That chassis was a snap to work with, since it already had the expanders AND the expanders have variable fan control.

I also hooked in 12 V triggers for the PSUs (Add2PSU, I think?) so that they turn on when the motherboard PSU/chassis turns on.