Malina: 6x Raspberry Pi CM4 compute cluster


niekbergboer

Active Member
Jun 21, 2016
Switzerland
Build’s Name: Malina (meaning "Raspberry" in various Slavic languages)
Operating System/ Storage Platform: Plain Debian 11 plus LXD and Ceph
CPU: 6x Broadcom BCM2711 (quad-core ARM Cortex-A72)
Motherboard: DeskPi Super6C
Chassis: SuperMicro SuperChassis 510T-203B
Drives: 6x WD Red SN700 (1000 GB, M.2 2280)
RAM: 6x 8GB
Add-in Cards: n/a
Power Supply: 100W DC (came with the motherboard)

Usage Profile: Low-power European Energy-Crisis-Winter homelab and basic services for the home.

"Winter Is Coming" in Europe, and with the geopolitical situation being as it is, I wanted to make sure that I could run a very basic set of homelab services on a low-power architecture such that I can have my APC surt1000xli bridge a rolling blackout, should those occur.

Encouraged by, among others, Jeff Geerling, I ordered the DeskPi Super6C, and then I paid what felt like 3 limbs, a firstborn, and a fortune, to acquire 6x Raspberry Pi CM4 8GB modules on eBay.

I was used to running Proxmox VE on my regular Intel cluster, but it was not to be: Pimox7 does exist, but it is not in a state suitable for what I expect. I made my third attempt at Kubernetes, and after a lot of toil, my third surrender, so I went OG: plain Debian 11 with Ceph RBD and CephFS, and LXD in multi-master (clustered) mode. This actually works surprisingly well: I run my various small services (IPv6 routing, CA, caches), as well as a full-blown ARM64 VM (KVM, through LXD) running Debian 11 with ZFS as a fileserver.
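For a flavour of what that looks like in practice, here is a minimal sketch driving the lxc CLI from Python; the pool and instance names are placeholders, and it assumes the Ceph cluster and the LXD cluster are already bootstrapped (in a cluster, LXD also wants the pool defined on each member with --target before the final create):

```python
# Minimal sketch: a Ceph-backed LXD storage pool plus instances on it.
# Names are placeholders; assumes lxc is installed and this node is already
# joined to both the Ceph cluster and the LXD cluster.
import subprocess

def lxc(*args: str) -> None:
    """Run an lxc CLI command and fail loudly on errors."""
    subprocess.run(["lxc", *args], check=True)

# RBD-backed pool for container and VM root disks.
lxc("storage", "create", "ceph-rbd", "ceph", "ceph.osd.pool_name=lxd")

# A small service container living on the shared pool.
lxc("launch", "images:debian/11", "svc-cache", "--storage", "ceph-rbd")

# A full ARM64 VM (KVM via LXD), e.g. for the ZFS fileserver role.
lxc("launch", "images:debian/11", "fileserver", "--vm",
    "--storage", "ceph-rbd",
    "-c", "limits.cpu=2", "-c", "limits.memory=4GiB")
```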

The main bottleneck in this little cluster is bandwidth: the CM4s have just a single lane of PCIe 2.0, and the motherboard has a GbE switch on it. I have noticed that running my full-blown InfluxDB setup on it leaves the nodes, containers, and VMs fighting with Ceph over the bandwidth, and it all ends in tears. I therefore moved that service to a server that I have running in a community-based colo in Zurich.

I have not done power measurements yet, but I will at some point. At any rate, it will be lower than the 90 W-a-pop idle power of my Intel cluster.

out.jpg
Malina sitting in its chassis.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Looks nice - I also considered this, but gave up because of the low bandwidth between the Pis.

If they had built the motherboard with, e.g., a 10 Gbps backbone or higher, it might have worked decently - but as soon as you require I/O, it's not really great, as you found out.

But it's a nice start of something - the next version might be more usable for more things.
 

niekbergboer

Active Member
Jun 21, 2016
Switzerland
That would indeed be interesting: PCIe 2.0 x1 is 500 MB/s in each direction (full duplex), so a 2.5 GbE switch would be a feasible improvement. 5 GbE would be pushing it, and 10 GbE is more than the link can saturate.
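For the arithmetic, a rough sketch assuming a faster NIC would have to hang off that single PCIe 2.0 lane (PCIe protocol overhead and Ethernet framing are ignored):

```python
# Rough sketch: what NIC speed a single PCIe 2.0 lane could actually feed.
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding (80% efficient),
# per direction; protocol overhead and Ethernet framing are ignored here.
PCIE2_X1_MBPS = 5e9 * (8 / 10) / 8 / 1e6   # = 500 MB/s each way

for name, gbit in [("1 GbE", 1), ("2.5 GbE", 2.5), ("5 GbE", 5), ("10 GbE", 10)]:
    nic_mbps = gbit * 1e9 / 8 / 1e6        # NIC line rate in MB/s
    verdict = "fits" if nic_mbps <= PCIE2_X1_MBPS else "exceeds the lane"
    print(f"{name}: {nic_mbps:7.1f} MB/s vs {PCIE2_X1_MBPS:.0f} MB/s -> {verdict}")
```

1 GbE (125 MB/s) and 2.5 GbE (312.5 MB/s) fit under the 500 MB/s lane, while 5 GbE (625 MB/s) and 10 GbE (1250 MB/s) already exceed it, so past 2.5 GbE the PCIe link itself becomes the ceiling.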

Also, in my case, Ceph is not very friendly on these small CPUs.

Interestingly, in contrast to my Intel cluster, RAM is not the first bottleneck in my current situation: over half of it sits idle or is used as block cache.
 

PigLover

Moderator
Jan 26, 2011
Love the DeskPi cluster board! I've got the Super6C mounted in a 3D-printed case with a laser-cut acrylic lid so I can watch all the fun flashy lights. Running Kubernetes (k3s) with Longhorn for storage. Just running a few test apps for now, but planning to move over most of the things that run all the time (Home Assistant, etc.). Longhorn seems to cope with the limited network bandwidth a bit better than Ceph because it at least tries to keep an active replica on the same host as the pod claiming the storage - though it is by no means anything close to "fast". It is quite functional, though.
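The replica-locality behaviour mentioned above is Longhorn's dataLocality setting. As a minimal sketch, expressed through the Kubernetes Python client rather than the usual StorageClass YAML and with a made-up class name, it boils down to something like this:

```python
# Sketch: a Longhorn StorageClass that asks for a local replica when possible.
# Class name is made up; this is the Python-client equivalent of the usual YAML.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

sc = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="longhorn-local"),
    provisioner="driver.longhorn.io",
    parameters={
        "numberOfReplicas": "2",
        # "best-effort" tells Longhorn to try to keep a replica on the node
        # that runs the pod using the volume, saving a network hop.
        "dataLocality": "best-effort",
    },
)
client.StorageV1Api().create_storage_class(sc)
```

Volumes provisioned from that class inherit the setting, so pods get the local-replica behaviour without per-volume tweaking.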

I got lucky enough to get the CM4s at/near MSRP by watching for stock on rpilocator, though it requires a lot of patience - a couple of months in my case.

WD Blue 500 GB drives on each Pi, Waveshare heatsinks, cheap M.2 heatsinks on the NVMe drives, and 3x Noctua 40mm fans. I built the case so that 30-40% of the airflow pulls across the bottom so that the NVMe drives won't get hot (probably overkill). It's also exactly 1U high, so if you really wanted to, you could slide it into a rack.

20221108_134537.jpg
(Didn't really notice how much dust it's collected pulling air through the front until I uploaded this pic... ugh!)