I've been running a personal file server of some kind for ~20 years now...I think I started with a bunch of 160 GB Maxtor IDE drives and an 8-port IDE hardware RAID controller and it's been evolving ever since. These days, I'm all about ZFS (via TrueNAS).
First, the goods:

Higher-resolution images at Imgur:
Right now my setup consists of a Dell R720 running TrueNAS Scale, with a couple of LSI SAS 9202-16e cards connected to a pair of 24-bay drive chassis that now function purely as DAS boxes. I just finished a minor overhaul of one of those DAS units and wanted to share what I did.
The main chassis is a Norco RPC-4224 24-bay server case (purchased back in 2015). It has 24 SAS/SATA hot-swap bays. Each row of four drives is on a separate backplane, which has an SFF-8087 connector to attach to whatever controller you are using and a standard 4-pin Molex power connector.
The six backplanes connect to a Dell 9-port SAS2 expander (6 internal SFF-8087 ports, 3 external SFF-8088 ports) purchased off eBay. These have been floating around for a while now and, IMHO, make pretty good expanders for DIY DAS/NAS projects. See this thread for more information.
The drives, expander, and cooling are powered by a 460 W HP CommonSlot server PSU with a GPU miner breakout board. The breakout board provides a dozen GPU connectors with +12V and GND; to get the +5V needed by the drives and the expander, I use three 75 W 12V-to-5V DC-DC converters, one per pair of backplanes. I did some quick math using power figures from a couple of different drive models in my ZFS pool, and this should give me plenty of headroom on the 5V side. I 3D printed an ATX-to-HP-PSU bracket and then designed and printed a support bracket for the breakout board as well.
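For illustration, the quick math looks something like this. It's a sketch with placeholder numbers: the per-drive 5V draw here is a typical worst-case figure from 3.5" HDD datasheets, not a measurement of my actual drives.

```python
# Rough 5V budget for one 75 W 12V-to-5V converter feeding a pair
# of backplanes (8 drive bays). The per-drive 5V current is an
# assumed worst-case figure from typical 3.5" HDD datasheets --
# substitute the numbers from your own drives' spec sheets.
DRIVES_PER_CONVERTER = 8      # 2 backplanes x 4 bays each
AMPS_5V_PER_DRIVE = 0.9       # assumed worst-case 5V rail draw (A)
CONVERTER_WATTS = 75

load_watts = DRIVES_PER_CONVERTER * AMPS_5V_PER_DRIVE * 5.0
headroom_watts = CONVERTER_WATTS - load_watts
print(f"5V load: {load_watts:.0f} W, headroom: {headroom_watts:.0f} W")
```

Even with a pessimistic 0.9 A per drive on the 5V rail, each converter runs at roughly half load.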
I used some leftover modular PSU cables and other bits and bobs to make custom cables to wire up the Molex power connectors for the backplanes to the DC-DC converters and the breakout board.
Previously, this box used a general-purpose consumer ATX PSU. With this overhaul, I switched to the HP PSU + breakout board for a few reasons: (hopefully) improved efficiency, simpler internal wiring, and more available space inside the chassis.
One of the reasons I wanted more space inside the chassis was the expander: I had my SFF-8088 cables routed through the ATX I/O opening in the rear panel to the expander, but the expander itself just sat on plastic standoffs, not really attached to anything. So one of the things I did was design and 3D print a shield for the I/O opening for the expander's SFF-8088 ports. Then I made standoffs of the necessary height, attached them to the expander, and glued them to the bottom of the chassis so everything stays put and looks nice and neat.
And one of the main reasons for this overhaul in the first place was to improve the airflow and cooling. It was, shall we say, sub-optimal. I have the 120mm fan wall for the Norco, but when I originally put this together, it lived in my home office and I prioritized noise over cooling and allowed my drives to run a little bit hotter than I probably should have. But I've moved since then, and now my rack basically has its own room and noise is not as much of a concern.
I had been using 3× 120 mm Noctua fans on the fan wall and 2× 80 mm Noctua fans on the rear panel. I replaced those with 3× 120×38 mm high-speed/high-pressure server fans on the fan wall, plus 2× 80×38 mm and 1× 92×38 mm fans on the rear wall (the two 80 mm fans are in the factory mounting locations; the 92 mm fan is on a 3D-printed bracket that occupies 5 of the 7 PCI slots). All of them are 4-pin PWM fans.
Those fans move a metric crapload of air. They're definitely overkill, but I wanted to make sure I wasn't going to wind up back in the same situation of inadequate airflow, and I decided to go with these fans (and slow them down if necessary) rather than try to work out exactly how much airflow I needed and find a set of fans to give me exactly that amount (i.e., I went the lazy route).
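One way to check whether new airflow like this is actually doing its job is to watch drive temperatures. Here's a minimal sketch that pulls SMART attribute 194 (Temperature_Celsius) out of `smartctl -A` output; it assumes ATA-style SMART attribute tables (SAS drives report temperature in a different format) and a device path like `/dev/sda`:

```python
import subprocess

def parse_temp(smart_output: str):
    """Extract the raw Temperature_Celsius value from ATA-style
    `smartctl -A` output. Attribute 194's raw value is the 10th
    whitespace-separated field on its line. Returns None if the
    attribute isn't present (e.g. SAS or NVMe drives)."""
    for line in smart_output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == "Temperature_Celsius":
            return int(fields[9])
    return None

def drive_temp(dev: str):
    """Run smartctl against a device (e.g. /dev/sda) and return
    its temperature in Celsius, or None."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    return parse_temp(out)
```

Polling this before and after a fan change (or during a scrub) gives a concrete answer to "did the airflow actually improve?" rather than guessing.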
To control the fans, it took a couple of tries to find a suitable PWM fan speed controller. It turns out most of the cheap/common "PWM fan controllers" just PWM the +12V supply pin to undervolt the fan(s). Eventually I found some that leave the +12V line alone and send a proper PWM signal to the fans' dedicated PWM input pin. These worked great at slowing the fans down to what I think is a good balance of airflow and noise, and I can always adjust later if necessary. I went with one speed controller for the fan wall and one for the rear fans. The controllers I bought have the adjustment potentiometer on a long 3-wire cable, so once again I fired up the 3D printer and designed a bracket to hold the knobs in an empty PCI slot for easy access. As a bonus, the controllers use a 6-pin GPU power connector, so connecting them to the breakout board was easy.
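The difference between the two kinds of controller can be sketched numerically. The cheap ones chop the +12V supply itself, so the fan sees a reduced average voltage and may stall at low settings; a proper 4-pin controller keeps +12V constant and varies the duty cycle on the dedicated PWM input pin (nominally a ~25 kHz signal per Intel's 4-wire fan spec). A toy model of the distinction, with illustrative numbers rather than measurements:

```python
# Two "PWM fan controller" approaches, idealized for illustration.

def voltage_pwm(duty: float) -> float:
    """Cheap controller: chops the +12V supply line itself.
    The fan sees only the average voltage; below its minimum
    operating voltage (often ~5-7 V) it may stall or misbehave."""
    return 12.0 * duty  # average voltage delivered to the fan

def true_pwm(duty: float):
    """Proper 4-pin control: +12V stays constant, and the duty
    cycle goes to the fan's PWM input pin; the fan's own drive
    circuitry handles the speed scaling."""
    return 12.0, duty   # (supply voltage, commanded duty cycle)

print(voltage_pwm(0.5))  # 6.0 V average -- near many fans' stall point
print(true_pwm(0.5))     # (12.0, 0.5) -- fan runs happily at half speed
```

This is why voltage-chopping controllers behave poorly with PWM fans at low settings, while a true 4-pin controller can throttle them smoothly across the whole range.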
Last but not least, one annoying thing about this particular chassis is that, with the supposedly official rails (RL-26, IIRC), the case doesn't line up properly in my rack - it sat low by about 4 mm, so I had to sacrifice the rack space below it. This always bugged me, so while I had it out of the rack, I modified the mounting holes on the inner rails to reposition the case so it now lines up properly. (I chucked a 1/8" carbide end mill in my drill press and used it as a poor man's milling machine to slowly elongate the holes in the right direction.)
I spent the last few days going on an Amazon shopping spree (not gonna lie, some of my parts choices were influenced by what I could get next-day via Amazon Prime, since I was making this all up as I went along) and performing the overhaul. I just put it back in my rack tonight and, fingers crossed, so far so good. After a few zip ties to tidy things up on the inside, I'm quite pleased with how it turned out.
- Parts List:
- Chassis: Norco RPC-4224
- PSU: HP 460W 499249-001
- HP PSU mining breakout: HP PSU GPU Mining for sale | eBay
- I have the version with a switch instead of an "on" button, so it turns on automatically when power is applied.
- 75W 12V to 5V DC-DC converters: https://www.amazon.com/gp/product/B07YZDP7Q4/
- 120mm fans: https://www.amazon.com/gp/product/B07Y84W6J7
- 92mm fan: https://www.amazon.com/gp/product/B0B1V5L4WB
- 80mm fans: https://www.amazon.com/gp/product/B07B7JQQHQ
- Fan controllers: https://www.amazon.com/gp/product/B0CF5TRNFJ/
- 6-position GPU power connectors: https://www.ebay.com/itm/150858940754
- Dell SAS Expander: https://www.ebay.com/sch/i.html?_nk...00,5r10n,+6tdvn,+010141900)&_sop=15&_svsrch=1
- Misc GPU power cables as needed
- 3D Printed Parts:
- HP PSU ATX adapter: https://www.printables.com/model/682849-hp-dps1200fbcommon-slot-psu-to-atx-holder
- Breakout Board support bracket: https://www.printables.com/model/682866-hp-common-slot-psu-breakout-board-support-bracket
- 92mm fan bracket for PCI slots: https://www.printables.com/model/965999-pci-slot-fan-mount-92mm-fan-support
- PCI blank for speed controller knobs: https://www.printables.com/model/988797-pci-slot-cover-for-fan-controller-knobs
- I/O shield for Dell SAS expander: https://www.printables.com/model/988800-atx-io-shield-for-dell-sas-expander-5r10n-6tdvn-et
- 3D Printer: Elegoo Neptune 4 Pro
- 3D Filament: Elegoo Rapid PETG