Designing 1U/2U/3U/4U rackmount server chassis - these will be going to production - looking for input, ideas, and feedback!


i386

Well-Known Member
Mar 18, 2016
4,220
1,540
113
34
Germany
In my experience fans suck :D at pushing air through HDD "bays", especially with silent consumer fans. I think a setup with the fans behind the HDDs, pulling air through them, will work better...

Also, what happens if not all "bays" are filled with HDDs? Vendors like Supermicro use dummy trays to fill the empty space and force the air to take the same path it would if HDDs were installed there.
 

JBOD JEDI

New Member
Feb 5, 2019
9
6
3
Great considerations.

Fan exhaust is more directional than the intake side, so it needs more distance to sort itself into different airflow paths. You could either build a baffle to move the fans behind the drives, or, as an easier (less compact) solution, move the drives further back from the fans. A 14xHDD/ITX/CX4170a or 28xHDD/ITX/CX4200a could accommodate plenty of space between the fans and hard drives.

The Supermicro chassis relies on high pressure to push air through a tight network of components, where any new opening has a major effect on pressure, so the airflow diverts away from the drives and into the lower-pressure open spaces. However, in a low-pressure design with much larger gaps between drives, no caddies, no backplane, and no cables blocking the airflow paths, there is significantly less airflow diversion away from the hard drives into newly opened spaces, because the pressures are roughly the same everywhere. And it wouldn't be difficult to use a surface to block open HDD spaces if needed, which is easier than screwing a dummy tray to a caddy for each open slot like Supermicro does.
 
  • Like
Reactions: SligerCases

thedman07

New Member
Sep 14, 2020
24
23
3
As for connectivity, you don't need to have a backplane for "Front-Loader" types like this. The drives can slip in and have their asses exposed for connections.
I may be in the minority, but I will not buy a storage case that doesn't have some kind of backplane. Power distribution and data cabling are a nightmare and being able to use SAS cables is more than worth the additional cost.

Maybe you can make it compatible with a Supermicro backplane (or maybe even multiples) or something like that. 2U backplanes aren't worth as much on the used market, so two of those for a 4U case would be great.

An option for a JBOD chassis/disk shelf might be nice too.
 

nabsltd

Active Member
Jan 26, 2022
339
207
43
I may be in the minority, but I will not buy a storage case that doesn't have some kind of backplane.
I'm pretty much the same.

Backplanes have lost their appeal for me in a 24/7 NAS because they consume energy, require caddies, restrict airflow, and I'm at the point where I have no need for hot-swap capability.
For me, 24/7 means you need to run it 24/7, which means you have to have hot-swap to allow for maintenance without downtime.

I think you could accomplish your goal and still keep a backplane by putting the backplane on the bottom of the chassis, and still have the drives top-load like in your picture. You'd only need some very skinny guide rails (no tool needed connection to the drive) that could be less than the width of the drive, and the frame wouldn't block airflow. And, it could still be hot-swap, although you'd have to pull the top off the chassis. This can be done while powered on, though.

Also, as drawn, you'd have to unplug the power cables from every drive to remove just one. You also don't show the data cables, and those can't be daisy-chained, so your very clean picture would soon become a tangled mess.

Also note that, depending on the type of backplane, it wouldn't have to consume much power. If it was just a PCB that did exactly what your picture does, so that it's just wires to individual drives, then it shouldn't require any power; it would have a power connection, but that's just to power the drives that slide in. I prefer to use SAS expanders because they save me money: it costs more for a motherboard that can handle the extra HBA, plus the extra HBA itself, plus the fact that a cheap "8i" HBA wouldn't be good enough...you'd need at least one 16i for 20+ disks.

Sure, the expander uses power, but honestly if you are running 20+ disks, the backplane power is nothing compared to the disks. And if you add in the cost of the extra HBA, that's a lot of months of electricity.
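To put rough numbers on that trade-off, here's a minimal sketch; the expander wattage, electricity price, and HBA cost below are placeholder assumptions, not measured or quoted figures.

```python
# Rough break-even: continuous power draw of a SAS expander vs. the up-front
# cost of a second HBA. All figures are illustrative assumptions only.
EXPANDER_WATTS = 12      # assumed continuous draw of a typical expander card
PRICE_PER_KWH = 0.15     # assumed electricity price (USD)
EXTRA_HBA_COST = 150.0   # assumed price of an additional 16i-class HBA (USD)

kwh_per_month = EXPANDER_WATTS * 24 * 30 / 1000
cost_per_month = kwh_per_month * PRICE_PER_KWH
breakeven_months = EXTRA_HBA_COST / cost_per_month

print(f"Expander: ~{kwh_per_month:.1f} kWh/month (~${cost_per_month:.2f}/month)")
print(f"A second HBA costs roughly {breakeven_months:.0f} months of expander electricity")
```

With those placeholder figures, a second HBA costs on the order of a decade's worth of expander electricity, which is the point being made above.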
 

thedman07

New Member
Sep 14, 2020
24
23
3
A backplane on the bottom of a chassis would be fine. You can buy right-angle SAS cables if you need clearance. The backplanes for Norco chassis were fine too (although I think they had some QC issues in the power circuits). They didn't even have expanders on them, although the user could add one. They had one backplane design shared across different chassis, so from a cost perspective that would be pretty manageable.
 
  • Like
Reactions: SligerCases

SligerCases

New Member
Mar 19, 2022
14
16
3
Nevada
www.sliger.com
The Sliger rack mount enclosures are the best designs I've seen after searching for years.

If you're planning to do a Sliger 5U, I find that 2x 140mm fans spaced apart is the quietest configuration, based on a 5U HTPC home build I did recently. I've since converted that project to strictly NAS and am considering a Sliger 4U CX4150a for the HTPC. The simplistic industrial design is very attractive.

However, I've always envisioned something a bit more sleek/cosmetic for high-end A/V and home theater racks. An entirely glass front surface, black and 25% semi-transparent, so you can see illuminated motherboards and components only when powered on. Air drawn in from a 140mm fan on each side of the enclosure.
I wasn't planning on a 5U originally, but I like your idea a lot. If I do wind up doing one, it will be for 3x 140mm front fans.

The front panel is also a fun idea. I am just starting to get requests from various people on what they'd like to see for front panels. The current hex pattern is just an easy-to-produce / decent-looking placeholder for now.

In addition to using a CX4150a/i for HTPC, I'm considering modifying another CX4150a/CX4170a/CX4200a for NAS.

Backplanes have lost their appeal for me in a 24/7 NAS because they consume energy, require caddies, restrict airflow, and I'm at the point where I have no need for hot-swap capability.

The open design of these enclosures makes it super easy to weld in some guide rails for retaining hard drives in a top-down NAS configuration. The top down configuration has grown on me because there is zero restriction of air flow across the drives, as all of the SATA and power connectors are up and above the air flow path.



Other notes:

SATA power connections would be DIY, with push-in pass-through connectors, 7 drives per rail.

The DeepCool AK620 is testing better than the Noctua NH-D15, and thus better than the recommended NH-D12L, and it costs less than either. It's 160mm tall, but the top cover can be removed to bring it down to 157mm. You may just need to slide the fans down a fin to fit within the 158mm the case allows.

The Noctua chromax.black fans don't have that funky brown & beige color.
I love your feedback and ideas. You're hired.

I'll see what I can do about making a mass-HDD bracket version of the case, something like this.

I will also update the recommended coolers page to suggest the AK620. (I also have to update it to say that the D15 works if you swap the stock fans for 120mm fans.) However, it sounds like the AK620 might be the best option, particularly at that price.
 

SligerCases

New Member
Mar 19, 2022
14
16
3
Nevada
www.sliger.com
In my experience fans suck :D at pushing air through HDD "bays", especially with silent consumer fans. I think a setup with the fans behind the HDDs, pulling air through them, will work better...

Also, what happens if not all "bays" are filled with HDDs? Vendors like Supermicro use dummy trays to fill the empty space and force the air to take the same path it would if HDDs were installed there.
Everything I have read on the Backblaze Pod / 45 Drives case has shown no issues with airflow when not all drive slots are populated. The only thing I would consider is some instructions to evenly space / stagger the drives, or to start from the middle and work outward. Don't just populate everything on the left side or something.

A backplane on the bottom of a chassis would be fine. You can buy right-angle SAS cables if you need clearance. The backplanes for Norco chassis were fine too (although I think they had some QC issues in the power circuits). They didn't even have expanders on them, although the user could add one. They had one backplane design shared across different chassis, so from a cost perspective that would be pretty manageable.
I was honestly looking at doing a 45 Drives-style case with my "backplane" being these right-angle SATA adapters and some custom SFF cables to minimize the cable clutter / assembly work.


Given that the straight versions have been so reliable, this would probably be the most cost-effective way to do that style of case.

I'm pretty much the same.


For me, 24/7 means you need to run it 24/7, which means you have to have hot-swap to allow for maintenance without downtime.

I think you could accomplish your goal and still keep a backplane by putting the backplane on the bottom of the chassis, and still have the drives top-load like in your picture. You'd only need some very skinny guide rails (no tool needed connection to the drive) that could be less than the width of the drive, and the frame wouldn't block airflow. And, it could still be hot-swap, although you'd have to pull the top off the chassis. This can be done while powered on, though.

Also, as drawn, you'd have to unplug the power cables from every drive to remove just one. You also don't show the data cables, and those can't be daisy-chained, so your very clean picture would soon become a tangled mess.

Also note that, depending on the type of backplane, it wouldn't have to consume much power. If it was just a PCB that did exactly what your picture does, so that it's just wires to individual drives, then it shouldn't require any power; it would have a power connection, but that's just to power the drives that slide in. I prefer to use SAS expanders because they save me money: it costs more for a motherboard that can handle the extra HBA, plus the extra HBA itself, plus the fact that a cheap "8i" HBA wouldn't be good enough...you'd need at least one 16i for 20+ disks.

Sure, the expander uses power, but honestly if you are running 20+ disks, the backplane power is nothing compared to the disks. And if you add in the cost of the extra HBA, that's a lot of months of electricity.
Agreed on all your points, and I would only design something with the SATA ports at the bottom of the case for drop-in storage.

The only issue with SAS expanders is the current lead time on the controllers, and what I do get quoted is not cost-competitive at all.

The best option in the current supply-chain-strangled world is probably the SATA right-angle adapter I linked above, combined with some custom cables to make it easy to get things wired up and to replace failed connectors / connections.

I'll probably be able to work on a 45-drives equivalent case late this year, will post ideas here for feedback!
 
  • Like
Reactions: asmith and Talyrius

thedman07

New Member
Sep 14, 2020
24
23
3
Agreed on all your points, and I would only design something with the SATA ports at the bottom of the case for drop-in storage.

The only issue with SAS expanders is the current lead time on the controllers, and what I do get quoted is not cost-competitive at all.

The best option in the current supply-chain-strangled world is probably the SATA right-angle adapter I linked above, combined with some custom cables to make it easy to get things wired up and to replace failed connectors / connections.

I'll probably be able to work on a 45-drives equivalent case late this year, will post ideas here for feedback!
You don't even have to include expanders. Even if you just built a backplane with the equivalent of a breakout cable plus power handling, it would be huge. One SAS connector for every 4 drives and Molex (or SATA) power connectors would make a big difference. The customer can source their own expander if they need one. You can look at adding expanders in the future if/when things calm down.

Like I said, I'm just not going to buy a storage chassis where I potentially have to deal with 40+ separate SATA cables and a bunch of power connectors. There's a reason 45 Drives has moved away from bulkhead-style connectors and cable systems. I can drill some holes in steel and put a bunch of drives in any case I want if I'm willing to deal with a ton of cables; that's not what I'm looking for in a storage chassis.

I don't think you need to keep pricing on this chassis in the sub-$300 range if you provide something other than a mess of cables. No one makes a JBOD/disk shelf chassis that is worth a damn for home use; they're either proprietary disk shelves or full-depth storage server chassis. Something configured for 16 or 24 drives (vertical is good with me), an SFX power supply, and some relatively quiet fans would be great for me. You could throw a Supermicro JBOD controller board in it and not even have to deal with fan/PSU/front-panel controllers.
 

nabsltd

Active Member
Jan 26, 2022
339
207
43
Best option in the current supply chain strangled world is probably the SATA right angle adapter I linked above, combine with some custom cables to make it easy to get things wired up and replace failed connectors / connections.
What I was thinking of was a very basic PC board with the connectors soldered on. The board would not be an expander, but merely a backplane with the power inputs (standard 4-pin Molex preferred) and the data inputs (any of the 4-lane SAS connectors would be preferred, since you can use plain SATA with reverse-breakout cables). That reduces the cabling to something manageable compared to 20+ separate cables.

An advantage to using a board is that the hot-swap connectors could be placed with a lot of precision, and would not move. Another advantage is that you could have an option to use a true expander if there was demand...just swap out the backplane. Supermicro has used this strategy for years, and it allows them to build very solid cases with a wide variety of options for every need.
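As a rough illustration of that cabling reduction (the drives-per-connector ratios below are assumptions for the sketch, not any specific product's layout):

```python
# Cables needed for N drives: direct-wired SATA vs. a "dumb" backplane that
# exposes one 4-lane SAS connector (e.g. SFF-8087) per 4 drives and a few
# Molex power inputs. Ratios are illustrative assumptions.
import math

def cable_counts(drives: int, drives_per_power_lead: int = 4, drives_per_molex: int = 6):
    direct = {"sata_data_cables": drives,
              "psu_power_leads": math.ceil(drives / drives_per_power_lead)}
    backplane = {"sas_4lane_cables": math.ceil(drives / 4),
                 "molex_inputs": math.ceil(drives / drives_per_molex)}
    return direct, backplane

print(cable_counts(24))
# -> ({'sata_data_cables': 24, 'psu_power_leads': 6},
#     {'sas_4lane_cables': 6, 'molex_inputs': 4})
```

For a 24-drive layout that works out to 6 SAS cables and 4 Molex feeds instead of 24 individual SATA cables plus power splitters.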
 

meep

New Member
Jul 15, 2022
1
3
3
So some thoughts, mainly around my experience with bifurcation solutions.....

One of the main objectives for me is the 'one box to rule them all' principle. Leaving aside concerns about a single point of failure, I have been building servers that run multiple VMs, some headless for home automation, but several driving workstations on remote screens throughout the house over HDBaseT.

The main challenge with this is that, needing a GPU and USB controller (at least) per VM, you run out of motherboard and case slots pretty quickly!

The solution here is bifurcation, but chassis space is still the challenge.

I know OP has seen it, but for anyone else, here's a blog post about the kind of setup I like to build.

For me, with up to a dozen expansion cards to shoehorn in, some double width, space is the key concern.

A single chassis design with over-specced PCI mounting points is the holy grail. To achieve this, moving the PSU forward in the case is a great solution. This allows the MB to be shifted to one side and all of the remaining back panel to be dedicated to slots, slots, slots. Again, I know OP has seen my impressions of the X-Case X465E, but for me that was almost the ideal solution.

However, even that didn't provide enough space, and even though it's very long, I found it cramped to work in.

My next add-on was to move to a 2-box solution - the X-Case as the main server, but with an additional tethered box to provide storage capacity and PCIe overflow. I modded a Silverstone Grandia GD07 for this (love that case, with its removable and flexible drive cage). I removed the back panel and added this full-length PCI bracket arrangement. More than enough PCIe mounting points there! (All that's missing is a blanking plate on 2 or 3 of those slots to house an IEC socket for power.)

DSC_0052.JPG

And I think such a 2-box solution has merit. I loved the design of the old Coolermaster HAF Stacker, where an optional add-on chassis was available for huge flexibility. ThermalTake also do this with the sadly discontinued Core X9 (I have one; it's cavernous, but I've never been able to find a second). They also do the P200 pedestal, which is an add-on for the W200.

What I'm getting at here is that I'd love to see a chassis system that's modular and flexible. With maybe something as simple as replaceable back panels, a single (stackable?) chassis could start out as a basic server chassis with support for up to eATX, some storage, flexible PSU positioning, and as many PCIe mounting points as possible. Then a second chassis could be added to provide more PCIe mounting points, additional storage, and a second PSU (either to power that chassis itself, or for overall redundancy).

Key to this design would be cable routing between chassis. The DIY solutions I've built to date all tend to have a mess of cables running out the back of each of the units. With some (capped) slots in the top and bottom of the chassis, they could be stacked and the cables run internally. Indeed, one idea would be a completely removable top and floor on the case. Then a 3U base unit could easily be expanded to become 6U. Or 2/4, or even 4/8.

Anyway, that's probably enough rambling.

Just for reference, I've currently ended up with a Phanteks Enthoo Pro II. It's an amazing case that I'm very, very happy with. It's one of those that supports a secondary mATX system, but it's super flexible, supports 2 PSU positions, and has brilliant storage configuration options. Right now, I've leveraged it to provide 3 separate PCIe mounting locations as highlighted below, and that gives me all the space I need to locate the cards I need.

DSC_0053.JPG

One final thought. One of the most frustrating things about cases is obsolescence (ThermalTake, I'm looking at you). Often, proprietary case accessories such as drive mounting cages are impossible to purchase, or the entire case goes EOL, etc. The best approach would be as much standardisation as possible, to allow for cost-effective availability and support of as wide a range of accessories as possible.
 

JBOD JEDI

New Member
Feb 5, 2019
9
6
3
I may be in the minority, but I will not buy a storage case that doesn't have some kind of backplane. Power distribution and data cabling are a nightmare and being able to use SAS cables is more than worth the additional cost.

Maybe you can make it compatible with a Supermicro backplane (or maybe even multiples) or something like that. 2U backplanes aren't worth as much on the used market, so two of those for a 4U case would be great.

An option for a JBOD chassis/disk shelf might be nice too.
You're absolutely right. I abandoned the idea because of this. Here's the "almost" ideal backplane for my use...

backplane.jpg

It's a "dumb" backplane, no chips or major electronic components that constantly consume energy for functions I would never use. The spacing between the rows would accommodate the most air flow, although the design in this photo doesn't take advantage of it.

On a side note, at one point recently I embarked on a project to develop a drop-in replacement for the Supermicro 4U backplanes using a PCB production service. The reason for wanting to do this was not just the airflow and the useless energy-consuming functions, but because backplanes are all missing the one function I want. All of my drives are very rarely used. Even in sleep mode, a few dozen drives consume a good amount of energy 24/7 for no reason, and it racks up the SMART hours logged on the drives. My plan was to use e-fuses, which allow the drives to be completely powered off while not in use and then powered on only when needed. The function of the e-fuse is to limit the power ramp-up to the drive so there's no surge current that damages other components or resets the power supply. I did a ton of studying on all of the ramp-up current and timing requirements, and obtained and researched non-public SATA specification documents. It's a lot of fun stuff to learn and explore, but other things got in the way of the project. Apparently the newer large-capacity Supermicro servers have this function, also using e-fuses, but good luck digging any info out of the company about it.
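To illustrate the surge problem that controlled ramp-up is meant to solve, here's a minimal sketch of the 12 V rail peak with simultaneous vs. staggered spin-up; the per-drive current figures and group size are assumptions for illustration, not values from any datasheet or SATA spec.

```python
# Peak 12 V rail current: all drives spinning up at once vs. staggered groups.
# Per-drive currents and group size are assumed for illustration only;
# check the drive datasheet for real spin-up and idle figures.
DRIVES = 30
SPINUP_AMPS_12V = 2.0   # assumed 12 V draw per drive during spin-up
IDLE_AMPS_12V = 0.6     # assumed 12 V draw per drive once spinning
GROUP_SIZE = 4          # drives allowed to spin up at the same time

all_at_once = DRIVES * SPINUP_AMPS_12V
# Worst staggered moment: the last group spins up while the rest idle.
staggered_peak = GROUP_SIZE * SPINUP_AMPS_12V + (DRIVES - GROUP_SIZE) * IDLE_AMPS_12V

print(f"Simultaneous spin-up: ~{all_at_once:.0f} A on the 12 V rail")
print(f"Staggered in groups of {GROUP_SIZE}: ~{staggered_peak:.0f} A peak")
```

With those assumed figures, simultaneous spin-up would demand roughly 60 A on the 12 V rail versus about 24 A when staggered, which is exactly the kind of surge a per-drive e-fuse ramp is meant to avoid.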
 
  • Like
Reactions: Maxx_1150

JBOD JEDI

New Member
Feb 5, 2019
9
6
3
I can confirm the DeepCool AK620 CPU cooler (with the cooler tower covers removed and the fans lowered a notch or two) does fit in the Sliger 4U rackmount case. However, there is zero space between the cooler and the lid, so I would not set anything on top of the case that would apply pressure in the area above the CPU. If Sliger were to recommend this CPU cooler for their 4U, I'd suggest this caveat be noted. It works perfectly for me, as it will be in a rack anyhow. I will be switching from the Noctua NH-D15 to the new DeepCool on all my new builds, and 4U compatibility is a plus.

I'm very pleased with the case itself. Nothing survives shipping in my area, and this was no exception: it came to the door with all of the hardware, the heavy rack ears & grips, and the PCIe covers banging around loosely inside, as the retaining zip ties had broken. So I was expecting to open a scratched-up mess, but I was surprised to see that the powder coating was so strong there was not a single nick on anything. I was most impressed; it's the strongest coating I've seen, and I also really like the texture. I also very much like how it's constructed to maximize space better than most cases; from the rear, note the tight, clean fit of the bottom and side panels against the rear I/O/PCIe panel.

A significant but fixable issue is the fan shroud design. Pulling air through a grill right against the fan intake is always the least efficient configuration, and when I received the case I noticed the typical increased noise and flow restriction immediately. So I decided to take measurements of the noise and flow in this configuration and compare them with the fans reversed, so the grill sits against the exhaust, which is much less affected by physical obstruction than the intake. I did this with the Noctua NF-A12, but different fans vary in how much they are affected. The trend I've noticed is that fans with an emphasis on exhaust pressure at low noise are more affected by intake restriction, so I also measured with the Noiseblocker eLOOP. These are always unbearably loud with anything right against the intake, but sing wonderfully when the intake is free of obstruction. Turning them around in this case was a night-and-day difference (though that wasn't the fix I settled on).

Here is a video of the noise the Noiseblocker eLOOP makes in the Sliger 4U, in normal and reverse orientation.

Results measured at one foot...

Noiseblocker eLOOP
2100rpm 67dB (47dB when reversed)
1450rpm 54dB (40dB when reversed)

Noctua NF-A12
2100rpm 52dB (44dB when reversed)
1450rpm 41dB (36dB when reversed)

So my fix was to cut the grills out of the baffle. That produced the best measurements of all, without reversing the fans...

Noctua NF-A12 (grill cut out, not reversed)
2100rpm 43dB
1450rpm 35dB
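For a sense of scale on those deltas, here's a minimal sketch converting the measured differences into acoustic power ratios using the standard 10·log10 relationship (with the common rule of thumb that roughly 10 dB reads as about twice as loud).

```python
# Convert the SPL deltas measured above into acoustic power ratios.
def power_ratio(db_before: float, db_after: float) -> float:
    # Standard decibel relationship: every 10 dB is a 10x change in power.
    return 10 ** ((db_before - db_after) / 10)

measurements = {
    "eLOOP 2100 rpm, fans reversed":  (67, 47),
    "NF-A12 2100 rpm, fans reversed": (52, 44),
    "NF-A12 2100 rpm, grill cut out": (52, 43),
}
for label, (before, after) in measurements.items():
    print(f"{label}: {before} dB -> {after} dB, "
          f"~{power_ratio(before, after):.0f}x less acoustic power")
```

So the 20 dB drop on the eLOOPs corresponds to roughly a hundredfold reduction in radiated acoustic power.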

baffle.jpg

Of course there is a second grill integrated into the front cover, but its distance from the fan intake means it's not a significant obstruction.
 
Last edited:

jriker1

New Member
Sep 27, 2016
12
2
3
56
For me, I'm looking everywhere for a 3U (or multiple 3U) rackmount case for my basement rack, since 3U can hold full-height cards. Also one that is deep and doesn't have hard drive cages taking up the whole front of the case, or, if it does, cages that can be removed so a radiator can lie flat across the front area. And lastly, one that supports ATX power supplies; I don't want a screamer with little 40mm fans running in my house.
 
  • Like
Reactions: llowrey

JSchuricht

Active Member
Apr 4, 2011
198
74
28
I'm a bit late to this thread, but I recently picked up a Sliger CX3152a to replace an old HTPC, so here's an idea for an infrared option that would be easy to implement in production with a minor change to the printed USB bracket and the front panel cutouts. The infrared receiver is an Inteset PC-IRS5-01, which ties into the power switch, power LED, and USB.



Screenshot 2023-03-18 040415.png

Screenshot 2023-03-18 040445.png
Screenshot 2023-03-18 040828.png

The easy way of hooking up the power switch and LED.

Screenshot 2023-03-18 041544.jpg

Optional: a cleaner method for the power switch and LED.
Screenshot 2023-03-18 041652.jpg

The front plate opening is a bit too narrow, so it needs clearancing on the left side, which I didn't get a pic of.
1679137527279.png


Screenshot 2023-03-18 040920.jpg

Finished product racked.

Screenshot 2023-03-18 041333.jpg
 
  • Like
Reactions: ttvd

bambinone

New Member
Dec 26, 2020
18
21
3
Chicago, Illinois
I did make a 3U case that is 15" deep, and holds 10x 3.5" HDDs via thumb screws slid into capture slots.
Did this ever make it to market? I would buy one just to convert it to a JBOD.

I'd also love to see a 15–17" 3U chassis with two 5.25" bays up front.

I was in the same "backplane or bust" camp, but the cabling in the CX4712 with only 10 drives is totally manageable.
I will probably end up going this route but I am waiting for the revision with SAS interposers.
 
Last edited:

voodooflux

New Member
Aug 9, 2023
1
0
1
Did this ever make it to market? I would buy one just to convert it to a JBOD.
This was released as the CX3701, I think. I'm looking to run one as a JBOD too (currently trying to track down a Supermicro power board for this purpose), paired with a main system in a CX4150a or CX3150a.