Suggestions needed for 5 node cluster


Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Hi,
I am trying to move my homelab away from BIG servers to smaller units, preferably using non-enterprise hardware (except for the SSDs and the 10gbps NIC - thanks @i386 for pointing out the conflict :-D ).

I want to build a 5 node proxmox cluster with ceph as backing storage.

The last couple of months I have tried to find good enough hardware to satisfy my needs, but I keep finding nothing - it's either too expensive for my taste, uses too much power or has too little expansion capability.

I am looking for small units - so I can potentially stack them on top of each other - I love the 1L units, but they are too constrained.

What I need is:
Smallest case possible that can contain the following:

2-4 SATA drives
1 or 2 M.2 SATA/NVMe slots - or if that is not possible, 2 extra SATA ports
32GB RAM minimum, 64GB better
Built in NIC
At least 1 pcie slot that can fit a 10gbps network card
Low power consumption
4+ cores - more is better

What I don't need, but would be nice - hotswap capability.

My budget is around 2000 EUR, possibly a little more. This should cover the cost of:
Case
Motherboard+CPU+heatsink/cooler
PSU/powerbrick
Ideally RAM

I did consider just shelling out for 5x Supermicro SC721 and finding a few mini-ITX motherboards - but I would really like a case just a little smaller, since having 5 of these standing around will take up some room anyway.

I am already running a 5 node k8s cluster on Dell Wyse 5070s - and I have really fallen in love with these "low power" machines - but they are too constrained for a real VM cluster, simply because you cannot expand the storage. Something similar with more storage would be ideal :)
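
For reference, this is the back-of-the-envelope sizing math I am working from for the Ceph side - just a rough sketch, where the 1.92 TB drive size is a placeholder and I assume the default 3x replicated pools:

    # Back-of-the-envelope Ceph sizing for the planned 5-node cluster.
    # Assumptions (placeholders, not decisions): one 1.92 TB SSD per node
    # used as a single OSD, a replicated pool with the default size=3, and
    # the default nearfull warning ratio of 0.85.
    nodes = 5
    osd_tb_per_node = 1.92      # assumed drive size per node
    replica_count = 3           # Ceph default pool size
    nearfull_ratio = 0.85       # default nearfull threshold

    raw_tb = nodes * osd_tb_per_node
    usable_tb = raw_tb / replica_count * nearfull_ratio
    print(f"raw: {raw_tb:.1f} TB, usable before nearfull warnings: ~{usable_tb:.1f} TB")
    # -> raw: 9.6 TB, usable before nearfull warnings: ~2.7 TB

With 5 nodes and size=3 the cluster also keeps serving data with one node down, which is the whole point of the exercise.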

Any ideas are very much welcome.
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
Are those 2-4 SATA drives to be 2.5" or 3.5"? That plus a full PCIe slot all takes room. Would SFF desktops be too big? Perhaps an HP EC200a (Xeon D-1518) or similar?
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Are those 2-4 SATA drives to be 2.5" or 3.5"? That plus a full PCIe slot all takes room. Would SFF desktops be too big? Perhaps an HP EC200a (Xeon D-1518) or similar?
2.5 inch - I expect to use SSDs (enterprise drives).

The HP EC200a looks nice - but also looks expensive :) - and it does not seem to be possible to add a 10gbps NIC - but the expansion case you can add is nice - I will take a look at it. Thanks for suggesting it. Its power usage is almost where I want it - a little lower would be nice - but I could probably live with this :)

SFF Desktops might not be too bad - depending on the size :)
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
For me this contradicts your op :D:
Well - the SSDs need to be enterprise - otherwise my expectation is I would need to swap them out too fast :)
But you are right - except for the SSDs I hope to use standard consumer stuff if possible - and obviously also except for the 10gbps NIC - that is almost impossible to get in a consumer version - and even then, 10gbps NICs are cheap'ish.
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
I was looking for several months as well. Similar setup, only 3 nodes (Proxmox), but also with Ceph, for high availability of VMs.

Should- or must-haves were: silent, unless a compute job is running. 4+ cores each. 64+ GB RAM each. Working ECC, for which I checked the Linux RAS subsystem source code for hours. IPMI. Dual 40+ Gbps because of Ceph. A couple of SATA drives. Low power. 3+ PCIe slots because I have a DS4246 with ZFS pools and a TS3100 tape lib, so I want dual SAS controllers in one node for pass-through.
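
Side note on the ECC point: besides reading the RAS/EDAC sources, the quick runtime check I use is simply reading the EDAC counters from sysfs. A minimal Python sketch, assuming the platform's EDAC memory-controller driver is actually loaded:

    # Minimal ECC sanity check via the Linux EDAC sysfs interface.
    # Assumes the platform's EDAC memory-controller driver is loaded, so
    # /sys/devices/system/edac/mc/mc* exists; otherwise nothing is reported.
    from pathlib import Path

    edac = Path("/sys/devices/system/edac/mc")
    if not edac.is_dir():
        raise SystemExit("no EDAC memory controllers found - ECC reporting is not active")

    for mc in sorted(edac.glob("mc*")):
        name = (mc / "mc_name").read_text().strip()
        ce = int((mc / "ce_count").read_text())   # corrected errors since boot
        ue = int((mc / "ue_count").read_text())   # uncorrected errors since boot
        print(f"{mc.name} ({name}): corrected={ce} uncorrected={ue}")

If there are no mc* entries at all, the kernel is not reporting memory errors and I don't count that as "working ECC".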

Looked at pretty much everything. From a Xeon E-2244G on an Intel M10JNP2DB in a Fractal Design Define Mini C case, to the AsrockRack E3C256D4U-2L2T, to the Supermicro X11SCH-LN4F, to an Intel Xeon E5-2683v4 SR2JT or Intel Xeon E5-2696V3 SR1XK on a Supermicro X10SRi-F in a Fractal Design Define S, to a Core i9-12900K with an AsRockRack W680D4U-2L2T or Gigabyte MW34-SP0 board. Then I went over into the Xeon D department, only to find stuff from 2015 at insane prices, like a Supermicro X10SDV with a Cooljag BUF-E cooler - which in certain cases can't even IDLE properly. A XEON D WOW insert Lewis Black expletive. I briefly touched on the AMD B550D4-4L board by AsRockRack, but the ECC reporting looked fishy and it doesn't have enough PCIe slots. Everything was unobtainium, or insanely priced, or missing something I really wanted or needed, or didn't have the reliable history I wanted (like sudden-death Gigabyte).

So I gave up and went big. I'm going to buy a Rittal VX IT 5311.816 42U 800x2000x1200mm 19" rack with perforated doors for passive ventilation. Nodes are 8259CL on EPC621D8A, which I have already written too much about. Latest measurement with 6x32 GB RDIMMs and a ConnectX3 at idle is 49 watts for one node, including two 128 GB scrap SATA SSDs for boot RAID-1 and one Micron 5200/5300 MAX 960 GB for Ceph. I will add solar to step out of further electricity price madness here in central Europe.
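
To put the 49 watts into money, a quick estimate with an assumed 0.40 EUR/kWh (roughly the current madness here - plug in your own rate):

    # Rough running-cost estimate for one node idling at the measured 49 W.
    # The electricity price is my assumption (0.40 EUR/kWh), not a quote.
    idle_watts = 49
    price_eur_per_kwh = 0.40
    hours_per_year = 24 * 365

    kwh_per_year = idle_watts / 1000 * hours_per_year
    eur_per_year = kwh_per_year * price_eur_per_kwh
    print(f"~{kwh_per_year:.0f} kWh/year, ~{eur_per_year:.0f} EUR/year per node, "
          f"~{3 * eur_per_year:.0f} EUR/year for all three nodes")
    # -> ~429 kWh/year, ~172 EUR/year per node, ~515 EUR/year for all three nodes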

Finally, I couldn't find a single 19" case which had the option to make it fully silent at idle, or which could fit a Noctua NH-U12S DX-3647. So everything is mounted on Streacom Open Benchtable BC1 V2 frames and zip-tied onto a 19" shelf, two of them back to back with the ATX I/O facing front or back. I'm planning more stuff like two lightning detectors (40 km range), which will allow me to switch VDSL2 over to another modem that has half the throughput because it has a lightning protection device in front of it.
 

Y0s

Member
Feb 25, 2021
I don't have an answer - I've been looking with similar constraints (additionally interested in IPMI for remote management). It depends on what you're ready to compromise on. There's obviously not much of a market for small servers. If you want the PCIe slot plus 4 SATA drives you have to start with at least mini-ITX, and the embedded server boards will blow your budget.
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
I just looked and of course you're right, 2100 EUR per node. The 19" rack with all its gadgets (half openable side-doors etc) is 2500 EUR list price. Redundant networking (two MLAG'd SX6012, four CX3, 56G cables, rail kits) ran 1000 EUR. Storage (DS4246, controllers, cables etc.) without the disks another 650 EUR. When all is said and done, probably 12k EUR. Since I work in the biz and own half the company, bearable. ;-)

Look at the table Patrick just tweeted, column "Price": four Atom cores for a crazy 200. And I already found 200 for each Seasonic PSU crazy. But I wanted to buy with 2022's euros, while there is still any value left in the paper.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
Not versed on EU pricing, but in the US, a Supermicro X10SDV-4C-TLN2F can be had for $250 (example). That has integrated 10GbE and meets your other requirements as well. Should fit the budget with an inexpensive ITX case and memory as DDR4 2133 is cheap.
 

PigLover

Moderator
Jan 26, 2011
@Bjorn Smith is it possible that your struggle would be easier if you split your use case into a cluster and a bulk storage solution?

In most cases the storage needs of the cluster itself - clustered redundant storage for running vm images and the "stateful" part of your running services - shouldn't call for that many drives per node. The bulk storage can be split off into something that is reliable and redundant but not HA.

Example: 5 nodes for the "cluster" services + a NAS (which could be a stand-alone NAS or maybe just one node with more storage exposed on NFS/Samba). Cluster nodes would just need a boot drive + one larger/fast/enterprise SSD for Ceph. The bulk/volume storage you put on the NAS.

If you do this, then perhaps a cluster of Lenovo 1L boxes with a PCIe slot (for the 10GbE card) and 2 drives (one for boot/OS + one for Ceph) would be sufficient for your cluster without burdening every node with the requirements for so much storage.

This would probably meet 90+% of the goals you listed above, though it might be tough within your 2,000 EUR budget even if you shop carefully (5 enterprise class 1-2TB SSDs could eat that whole budget by themselves).
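
To put rough numbers on that last point (the drive prices below are placeholders of mine, not quotes - check current listings):

    # Quick illustration of how the Ceph SSDs alone compare to a 2000 EUR budget.
    # Per-drive prices are assumptions (used enterprise ~1.92 TB SATA), not quotes.
    budget_eur = 2000
    nodes = 5
    ssd_eur = 350            # assumed price per enterprise SSD
    boot_ssd_eur = 30        # assumed small boot drive per node

    drives_eur = nodes * (ssd_eur + boot_ssd_eur)
    print(f"drives: {drives_eur} EUR, remaining for 5x case/board/CPU/RAM/PSU: {budget_eur - drives_eur} EUR")
    # -> drives: 1900 EUR, remaining for 5x case/board/CPU/RAM/PSU: 100 EUR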
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
Supermicro X10SDV-4C-TLN2F can be had for $250
That is very inexpensive, but will still break the 2k limit because of all the a-la-carte components. I had this board in view, but dropped the idea because I have no use for 10GBit Ethernet and the board only has one PCIe slot.

When I looked at it three months ago, my concept was an X10SDV as the board, a Cooljag BUF-E 60x60x60mm copper cooler, three Noctua NF-A12x15 PWM 120mm fans, Samsung DDR4-2133 RDIMMs in the form of M386A4G40DM0, a Seasonic Prime Fanless PX-500, and a SilverStone Raven Z RVZ03 ARGB black as the case. Quite the small footprint. One fan would blow directly on the Cooljag and the board to cool everything there. The other two would blow onto a PCIe card and, through positive air pressure, force air out through the opening above and through the PSU, to cool that as well.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
That is very inexpensive, but will still break the 2k limit, because of all the a-la-carte components. I had this board in view, but dropped the idea because I have no use for 10GBit ethernet and board only has one PCIe slot.
Maybe for you, but it fits exactly what the OP needs. An ITX case/DC power supply isn't exactly expensive and would be all that's required aside from RAM and SSDs (not part of said budget).
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
I admit I wasn't looking for too long, but how about a Lenovo M75s Gen2:

Smallest case possible that can contain the following: - 93 x 298 x 340 mm (3.6 x 11.7 x 13.4 inches)
2-4 SATA drives - 2 SATA drives.
1 or 2 m.2 SATA/nvme slots - or if that is not possible, 2 extra SATA slots - plus 1x m2 pcie v3 x4 (total 3 drives)
32GB RAM minimum, 64GB better - up to 128gb
Built in NIC - check
At least 1 pcie slot that can fit a 10gbps network card - check - one PCIe 3.0 x1, low-profile (length < 155mm, height < 68mm) • one PCIe 3.0 x16, low-profile (length < 155mm, height < 68mm)
Low power consumption - AMDs are pretty good about power usage.
4+ cores - more is better - supports CPUs up to 12C Ryzen 9 Pro 3900

The only problem will be your budget - I can't see you buying 5 of these under 2k EUR for the next few years.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
@Bjorn Smith is it possible that your struggle would be easier if you split your use case into a cluster and a bulk storage solution?
This is what I had - and it worked nicely - but every time I had to reboot the storage node I had to shut down all my VMs - and I would like a solution where everything is HA - which is why Ceph seems to be a nice fit.

Not versed on EU pricing, but in the US, a Supermicro X10SDV-4C-TLN2F can be had for $250 (example). That has integrated 10GbE and meets your other requirements as well. Should fit the budget with an inexpensive ITX case and memory as DDR4 2133 is cheap.
Thanks for that link - I have mixed experiences with these boards - but that is certainly an option - although I would prefer a bit newer tech - and in the EU these boards go for almost double, at least - so if I had to buy from your link, the 250 would possibly become 350 each - and the idle power consumption of these boards is not really that low either - but it might be worth considering.

I admit I wasn't looking too long, but how about Lenovo M75s Gen2:
the only problem will be your budget, I can't see you buying 5 of these under 2k EU for the next few years.
Yeah - if I had all the money in the world it would be easy enough to spec out something small'ish and perfect - but thanks - at least I can take a look at similar systems.
 

PigLover

Moderator
Jan 26, 2011
This is what I had - and it worked nicely - but every time I had to reboot the storage node I had to shut down all my VMs - and I would like a solution where everything is HA - which is why Ceph seems to be a nice fit.
Ceph is a nice fit. But trying to use Ceph for everything is what gets you to your demanding specs for each node. If you use Ceph only for the things you need to keep the VMs running (or at least most of them), you'll probably get much lighter requirements on each node and still have HA. Maybe your Plex server ends up dependent on the NAS but everything else survives (and actually the Plex server would survive but lose access to the local library - you could still stream external sources).
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Ceph is a nice fit. But trying to use Ceph for everything is what gets you to your demanding specs for each node. If you use Ceph only for the things you need to keep the VMs running (or at least most of them), you'll probably get much lighter requirements on each node and still have HA. Maybe your Plex server ends up dependent on the NAS but everything else survives (and actually the Plex server would survive but lose access to the local library - you could still stream external sources).
I guess I could make a small 5 node "tiny" cluster of Ceph nodes - only for the important VMs - and then the rest could live on the NAS. But then I would suddenly have a lot of machines - I would like to cut it down to the 5 required for Ceph/VMs + my existing 5 node k8s cluster - which I could then "upgrade" to use Ceph storage as well, so it would also benefit.
 

a5tra3a

New Member
Jun 11, 2022
Calgary, Alberta, Canada
I am working on converting my existing setup to include Ceph and will be using 7 nodes, each with 1 x 250GB SSD for the hypervisor OS and another 1 x 250GB SSD for Ceph storage. I will only put essential VMs (firewall, reverse proxy, mail gateway and one more VM that runs some core docker services (Unifi Controller, Portainer, TrueCommand, CUPS, etc.)) on the Ceph storage, and the rest of my VMs will live on my QNAP TS-451s converted to TrueNAS.

My nodes are very old (5 x DL380 G5, 1 x PE2950 G3 and 1 x Supermicro-built system) and power-hungry at the moment, and I am looking to replace them with something much more modern. I have determined for my requirements that the physical size of the node is not the problem (I have 2 x 42U racks and 1 x 45U rack); what is important is physical expandability for PCIe and SATA, as well as heat management and power consumption. I plan to replace the 7 nodes I have with custom-built systems using consumer hardware in a 4U chassis, making use of 120mm fans and possibly liquid-cooled CPU AIO coolers.