Ceph-in-a-box (another take on Mini Cluster in a Box) WIP



Dec 4, 2018
Hi guys,

I'm working on a little project and thought you might find it interesting.
Not sure if this is the right place, but it seemed to fit best in "DIY Server and Workstation Builds".

The idea of this project is to quiet down my Ceph home lab and create an almost all-in-one solution.
I started out with three 1U servers (which aren't silent) and put all of these machines in one mini-ITX case.

Build's Name: Ceph-in-a-box
Operating System / Storage Platform: Proxmox VE 6.3
CPU: 3x Intel Xeon D-1521 (integrated)
Motherboard: 3x Supermicro X10SDV-4C-TLN2F
Chassis: Lian Li PC-Q33
Drives: 3x Micron 1100 + 12x Kingston V300 60GB
RAM: 6x 8GB ECC registered DDR4 (will upgrade down the road)
Add-in Cards: -
Power Supply: Corsair CX750
Other Bits: Noctua NF-A14

Usage Profile: Mostly testing new Proxmox VE updates/fixes with multiple storage backends such as Ceph/ZFS/LVM/NFS.

The server build is still a work in progress, but let's start at the beginning.

It started out with the idea to stack multiple mini-ITX motherboards together, like the Mini Cluster in a Box setup.
Instead of the BitFenix case I opted for a Lian Li PC-Q33, since it looked easier to mount my SSDs in.
There are no pictures of the planning phase, but you're not missing out on anything; that was a pretty messy stage without a lot of interesting shots.

The first fitment of a couple of motherboards after cutting the back of the case:

At first I planned to use a 150W 12V LED power supply (I later found out 150W wasn't enough once the SSDs were powered as well).
Here's my janky setup to get a first boot:

Next up was mounting the SSDs and cooling.
The SSDs were pretty easy to mount because I had already made some brackets to stack four SSDs in my 1U case.

I have some experience with creating my own custom cables, so naturally I had to make my own.
The SSDs are powered from the motherboards' power output; these boards can run on 12V only and convert it to the other necessary voltages.


It was at this moment I found out that 150W was not sufficient for this setup.
The bottom fan was planned to cool the power supply and some miscellaneous parts; since that was no longer needed, I dropped it from the setup.
A little bracket was made to attach a 140mm fan over the old PCIe slots.


The 4x SATA cables got delivered and were neatly routed in the case like the power cables:


My plan worked out beautifully: the cables won't kink when I close the case.


The new power source for this project is a salvaged Corsair CX750 I had lying around; this seemed like a better option than buying a bigger 12V power supply (and was certainly a lot cheaper).
Some modding was needed, as the 24-pin ATX cable wasn't looking good, so removing it completely was my solution; I didn't need it anyway.
I first opted to remove the 3.3V and 5V rails entirely, but that would have required overriding the WT7502 chip, which provides overvoltage and undervoltage protection and will not start the power supply when it doesn't see 3.3V and 5V.
Because disabling the OV/UV protection didn't seem like a good idea, I scrapped that plan.
24-pin ATX removed:

A fan mod was also definitely needed, as the Corsair fan made too much (rattling) noise in my opinion.
Unfortunately this PSU starts out with a 5V fan output voltage, which was not enough to start up a 140mm Noctua fan.
Instead I used an NZXT fan I had and made a little adapter to accommodate the 3-pin fan header.




Back to creating more custom cables and tidying up the SSD power cables with some sleeving.

As you'll be able to see, I switched the BeQuiet 140mm fan for a Noctua NF-A14.
The fan proved to be insufficient for cooling the CPUs during a stress test.
The top motherboard especially was struggling, probably because the bottom two have a nice wind tunnel and the top one doesn't (the case was closed during testing).
By using a small 60mm fan I was able to turn the big fan down to 800 RPM, making the machine a lot quieter.
I ordered a couple of 60x10mm fans, as the 60x25mm ones don't leave a lot of breathing room on the bottom two motherboards.

This is basically where the project is at this point in time.
A couple of pictures to show the server in its closed-up state:

On the to-do list:
- Power switches (these are on their way, so I have to wait for them to arrive).
- Fan control (I want the fan to spin when one or more servers are running, not just plug it into one motherboard).
- Finishing up the case, probably closing off the gaps left from the old 120mm fan hole in the back.
- Maybe internal 10GbE using the MikroTik CRS305-1G-4S+IN (not sure, as this will get quite expensive due to the SFP+-to-RJ45 modules I'll need for this setup).

For fan control I was planning to use an Arduino: it will pick up the PWM signal from all three motherboards and send the highest value to the fan.
The other option is to grab the signal from the power LEDs and statically set the PWM signal to the Noctua fan.

I hope you like my build so far; if you have any suggestions, please let me know!


Well-Known Member
Jan 6, 2016
Looks great!
I am way too lazy to make something so neat.
My testing environment (Ceph etc., similar to your use case), where performance doesn't matter, is 5x HPE EC200a's; not nearly as elegant as your single-box solution, for sure.


Dec 4, 2018
So I had to follow this up. Though it's still not completely done, there have been a bunch of improvements.

To start off, I created a simple script on an Arduino to spin the 140mm fan when one or more servers are turned on.

After testing everything out on the breadboard, I created a little PCB with some connectors to hook everything up.

The power buttons came in the mail, so they were installed in the front panel.
This took some measuring and fitting, because the switches are mounted in the holes where the original power button and USB ports used to be.

Another small package came in the mail with a totally unnecessary addition, but since this whole build is getting a bit more serious than I intended, I really wanted to tick all the boxes on the to-do list.


It took a bit of elbow grease and a lot of patience, but I'm happy with how the holes for the switch turned out.
The fan mount had to be altered a bit, and because the RJ45 SFP+ modules get pretty hot, I also added a small 100mm fan for the switch.

Of course everything had to be buttoned up with some custom cables.
Here's the result so far.

On the to-do list is filling up the holes on the front where the audio jacks used to be, and I still need to fill up the holes in the back of the case.
I'm still deciding whether I'm filling the front holes with LEDs or just covering them with scrap aluminium or a sticker of some sort.
For the back holes I need to get some more aluminium, as I don't have a proper piece to fit over the big 120mm fan hole.

The system is fully functional; however, I can't use the switch yet because the SFP+ modules are still on their way.

I hope you enjoy watching this build; if you have any suggestions, please let me know!
PS: There is still a "lot" of space left at the top of the server, above the top motherboard. Is there anything I can put up there?


Apr 17, 2017
Looks great!
I am way too lazy to make something so neat.
My testing environment (Ceph etc., similar to your use case), where performance doesn't matter, is 5x HPE EC200a's; not nearly as elegant as your single-box solution, for sure.
I was considering playing with Proxmox/Ceph using the EC200a's, would you still recommend them?


Well-Known Member
Jan 6, 2016
I was considering playing with Proxmox/Ceph using the EC200a's, would you still recommend them?
Well, I would never have recommended them as such, but for my situation it was just easy to make a functional test: I had the memory and all the drives etc., so combined with a silly-cheap price it makes a functional (not fast) lab. I mean, I could easily have spent $600 on a single machine; instead, that was a five-machine cluster :)
iLO etc. for remote power on and off also makes it nice.