I never want to run out of hard drive bays again - starting build.


kapone

Background - I have had various "storage servers" over the years, all based on off-the-shelf chassis/systems. While they've all worked fine for the most part, the noise is always there. The intent of this build is to make a seriously quiet, rackmount, lots-of-bays storage server.

This is mostly media/personal data/other stuff storage, so throughput/speed etc. are not that critical. Gigabit connectivity is fine for now, and 10GbE can be added in the future if need be.

I acquired a failed company years ago and I still have a bunch of crap sitting in my warehouse. The other day I roamed through it and picked out a few pieces. I had 7 of these 2U chassis (made by Ci Design, apparently) with 12 hot-swap bays each, just sitting there collecting dust. These are weird lil chassis that take two motherboards (stacked one on top of the other, with a single PCI slot each) and Coldwatt power supplies. Not usable as-is, but hmm...they'll make a nice storage array.

So, the plan is to take the Dremel to these, hack and stack 'em, and come up with a 14U storage enclosure with 84 hard drive bays. :) I recently bought 10x of the Intel/LSI 16-port SAS2 expanders, which will be perfect for this. It will be running off of one or maybe two M1015 adapters.

Stay tuned... :)
 

kapone

Fan "wall" planning...

These chassis each came with 4x 80mm fans in a midplane. The good thing is that the drive trays are pretty open, with good airflow through them. While I could have 28x 80mm fans running.... :) that'd defeat the purpose. My server room, aka part of my unfinished basement, is relatively cool, so I don't need to go crazy, but I gotta come up with something.

The chassis are just about 17" wide (and of course 2U each). So, having hacked them up so that only the bottommost chassis has a "floor" and the topmost has a "cover", that gives me 24.5" of vertical space.

17" = 431.8mm
24.5" = 622.30mm

I could use a fan wall of:
- 200mm fans - 2 across x 3 high. That would give me a bit of clearance on the side to snake the cables through.
- 140mm fans - 3 across x 4 high. That leaves some extra room at the top, but with a blanking panel, it should be workable.

Hmm....I think I need to go shopping for 200mm fans.
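
Quick sanity check on the fit (a rough Python sketch; assumes nominal square fan frames and ignores mounting flanges/gaskets):

```python
# Fan wall fit check: nominal square fan frames in a 17" x 24.5" opening.
MM_PER_INCH = 25.4
width_mm = 17.0 * MM_PER_INCH    # 431.8 mm
height_mm = 24.5 * MM_PER_INCH   # 622.3 mm (7 chassis x 2U, floors/covers removed)

options = {
    "200mm, 2 across x 3 high": (200, 2, 3),
    "140mm, 3 across x 4 high": (140, 3, 4),
}

for name, (size, across, high) in options.items():
    w, h = size * across, size * high
    fits = w <= width_mm and h <= height_mm
    print(f"{name}: {w}x{h} mm, fits={fits}, "
          f"spare width {width_mm - w:.1f} mm, spare height {height_mm - h:.1f} mm")
```

Both grids fit: the 200mm wall leaves ~32mm on the side for cables, while the 140mm wall leaves ~62mm at the top for that blanking panel.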
 

kapone

Been thinking about the engineering side of cutting up the 7 chassis, as I'm not physically at home right now. I think securing the fan wall to all 7 chassis and making sure it's a tight seal is gonna be a challenge.

Been also thinking about the power supply(ies) for this. I could go with a single Supermicro 1620W or so, and that "should" be enough, as I'll never write to all 84 disks at the same time (writing takes the most amps on the +5V rail). This will be running Windows Server 2012 R2 with StableBit DrivePool, and I'll be slicing the disks up into 4-5 pools.
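
Rough envelope math for the single-PSU idea (a sketch with assumed per-drive currents; actual figures vary by drive model, so check the datasheets):

```python
# Worst-case steady-state budget: 84 drives, all active at once.
# ASSUMED per-drive currents for a typical 3.5" 7200rpm disk (check datasheets):
AMPS_5V = 0.9    # logic/PCB draw during activity
AMPS_12V = 0.8   # spindle/VCM at steady state (spin-up is much higher)

drives = 84
amps_5v = drives * AMPS_5V
amps_12v = drives * AMPS_12V
print(f"+5V:  {amps_5v:.1f} A -> {amps_5v * 5:.0f} W")     # ~76 A, ~378 W
print(f"+12V: {amps_12v:.1f} A -> {amps_12v * 12:.0f} W")  # ~67 A, ~806 W
print(f"drives only: {amps_5v * 5 + amps_12v * 12:.0f} W") # ~1184 W before fans/boards
```

The total wattage fits inside 1620W, but ~76A on +5V is the catch: big 12V-centric server PSUs typically cap the combined 5V/3.3V output well below that.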
 

pricklypunter

You're sailing close to the wind on the power supply if you want things to run cool; best to aim for 60-70% of full duty loading, max. Oh, and assuming you will never write to all of the disks at the same time is going to come back and bite yer ass. Build and rate stuff based on full-load requirements :)
 

K D

How about 1 PSU per 4U of drives instead of a couple of huge ones?
 

kapone

You're sailing close to the wind on the power supply if you want things to run cool; best to aim for 60-70% of full duty loading, max. Oh, and assuming you will never write to all of the disks at the same time is going to come back and bite yer ass. Build and rate stuff based on full-load requirements :)
Agreed. That's why this is challenging.

However, even without staggered spin-up (though I'm shooting for with), once you get beyond a certain number of drives, they're not all gonna start up at the exact same time. Looking at examples of hardware for lots of drives (45Drives, the Supermicro 90-drive chassis), I think they go with the same thinking as well.

The Supermicro 90-bay chassis "only" has a redundant 2000W PSU (which is rated at far less on 110V than on 230V).
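
To put rough numbers on that, a toy spin-up model (assumptions: ~2A at +12V per drive for a ~10s spin-up, 0.8A steady after, groups of 6 started 2s apart):

```python
# Toy model of peak +12V draw: simultaneous vs staggered spin-up.
# ASSUMPTIONS: 2.0A @ +12V per drive while spinning up (~10s), 0.8A steady after.
SPINUP_A, STEADY_A, SPINUP_S = 2.0, 0.8, 10
DRIVES = 84

def peak_amps(group_size, delay_s):
    """Peak +12V current when drives start in groups of group_size, delay_s apart."""
    starts = [(i // group_size) * delay_s for i in range(DRIVES)]
    peak = 0.0
    for t in range(max(starts) + SPINUP_S + 1):
        amps = sum(
            SPINUP_A if s <= t < s + SPINUP_S else (STEADY_A if t >= s + SPINUP_S else 0.0)
            for s in starts
        )
        peak = max(peak, amps)
    return peak

print(f"all at once:           {peak_amps(DRIVES, 0):.0f} A")  # 168 A (~2 kW on +12V)
print(f"groups of 6, 2s apart: {peak_amps(6, 2):.0f} A")       # ~103 A
```

With those assumptions the peak drops from ~168A to ~103A on +12V, which is why the big-chassis vendors lean on staggered spin-up too.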
 

kapone

How about 1 PSU per 4U of drives instead of a couple of huge ones?
I'm actually thinking more along the lines of two relatively big power supplies, one for each ~42 drives (assuming a single 1620W in a redundant setup doesn't work).
 

kapone

After doing some rough calculations...I may have to break this up into two servers instead of one big-ass one. The math is just not working out (and racking one big-ass 14U server is gonna be a bit***).

Since I'm using the IBM 46M0997 expanders, these can do 20 drives each (hopefully; still need to test), and I have seven 2U chassis with 12 bays each. What I don't wanna do is cascade expanders (even though I have enough), so with a typical M1015-type card, that's 8 ports that can be expanded to 40 by using two expanders, one on each 8087 port.

But...the chassis are 12 bays each...so three of them equal 36 bays. If I combine four chassis in one server, I get 48 bays, but I run out of ports on the HBA and would need to cascade expanders or leave 8 bays disconnected, which seems a waste.

I could of course get a new HBA with more than 8 internal ports, or different expanders, or a new motherboard/CPU combo with enough PCIe slots to plug in three HBAs...decisions, decisions. It's not that I don't want to spend money, but storage is simple...it shouldn't cost an arm and a leg. I'd much rather spend the money on hard drives and/or compute nodes.

Hmm...
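
Written out, the port math looks like this (a sketch; assumes one expander per SFF-8087 uplink and the still-untested 20-drives-per-expander figure):

```python
# Port math: how many 12-bay chassis per 8-port HBA, no expander cascading?
# ASSUMES an M1015-style HBA (2x SFF-8087 uplinks) and the untested figure of
# 20 drives per IBM 46M0997 expander, one expander per uplink.
HBA_UPLINKS = 2
DRIVES_PER_EXPANDER = 20
BAYS_PER_CHASSIS = 12

drive_ports = HBA_UPLINKS * DRIVES_PER_EXPANDER          # 40
chassis = drive_ports // BAYS_PER_CHASSIS                # 3 full chassis
stranded = drive_ports - chassis * BAYS_PER_CHASSIS      # 4 drive ports unused

print(f"{drive_ports} drive ports -> {chassis} chassis "
      f"({chassis * BAYS_PER_CHASSIS} bays), {stranded} ports stranded")
# A 4th chassis needs 48 ports: cascade expanders, add an HBA, or leave 8 bays dark.
```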
 

kapone

Well...fan decision solved. :) These should be almost silent at the speeds I intend to run them at.

 

kapone

Power supply(ies) decision made. I'll be using 4x HP Common Slot 460W Platinum power supplies. Each gives me 20A on the +5V rail and more than enough on the +12V rail. The problem in powering this many drives is not just the +12V output; the +5V is critical too. "KD" was right: multiple PSUs was the answer.


Each of these will power 20x HDDs, which should be within the parameters of the PSU. And oh...these PSUs and backplanes/PDBs are dead cheap.
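
Checking that 20-drives-per-PSU split with the same assumed per-drive currents as the earlier sketch (hedged; datasheet numbers vary):

```python
# 20 drives per 460W PSU with 20A available on +5V (figures quoted above).
# ASSUMED per-drive currents, same as the earlier estimate:
AMPS_5V, AMPS_12V, SPINUP_12V_A = 0.9, 0.8, 2.0

drives = 20
print(f"+5V load:         {drives * AMPS_5V:.0f} A of 20 A")       # 18 A, tight but OK
print(f"+12V steady:      {drives * AMPS_12V * 12:.0f} W")         # ~192 W
print(f"+12V all spin up: {drives * SPINUP_12V_A * 12:.0f} W")     # 480 W > 460 W
```

18A of 20A on +5V is tight but inside spec, and simultaneous spin-up alone would blow past 460W on +12V, so staggered spin-up still matters.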
 

kapone

Pictures of the actual build coming soon...It's biting cold right now, and I'm not in the mood for wearing layers of clothing to go out and start using the angle grinder...Because I KNOW someone is gonna say...

 

kapone

Will be using a Chenbro Chassis Management Board for fan control.



(This hacked-up whatever-you-wanna-call-it is essentially a big-ass JBOD enclosure with 84 bays, 80 of them functional. Four SAS2 expanders inside, attached to the 80 bays and going out to four SFF-8088 connectors for connectivity to the actual storage server.)
 

kapone

Other bits and pieces...

- 4x Add2PSUs (the enclosure will turn on and off with the server)
- Bare wire for custom harnesses
- PCIe powered risers for the expanders (they don't have a Molex...)

 

kapone

Project shelved. The "Supreme Commander", aka my wife, said "You have two toddlers...WTF are you thinking? Shut it down. Now."

I couldn't argue.