Server hardware for a private-use Ceph cluster


storageNator

New Member
Aug 6, 2023
Hello,


I am looking for hardware for a Ceph cluster for private use.
Initially it would serve as a learning platform, and if it runs well, it would become one stage of my backup concept.

The servers should consume as little power as possible (I'm aiming for a maximum of 50W at idle) and should not be too expensive; I have set myself a budget of 200-300€ per server, excluding storage devices.
Currently I have 4x 10TB SAS HDDs, so it would be good if the server had SAS ports, or could get them via an expansion card.

Does anybody have a recommendation for suitable server hardware?

What do you think about ARM as server hardware?
I am aware that it could be a bit of a bottleneck with Ceph; on the other hand, it should probably be enough to keep up with HDD performance?
 

oneplane

Well-Known Member
Jul 23, 2021
If you don't mind bad performance it's possible: you could get 5 NUCs (i5, 8GB to 16GB RAM depending on age) and 5 SAS-to-USB controllers. But beyond that, I'm not sure how you'd put together reasonably healthy hardware for Ceph, especially at that 50W baseline.

Even an old R720 would put your baseline way over 110W (and that's after serious limits and tweaks - it normally sits around 220W), and you'd still need 10Gbps NICs and a switch on top of that.

As for the SAS HDDs: what do they consume at idle, and what when they are spun down? You'd have to subtract that from your power budget, and then account for RAM, BMC, NIC, CPU idle power, etc., which makes it really hard to find server hardware that fits that budget. With SSDs it might be easier (no mechanical actuation). Regarding the SAS HDDs you already have: do you plan on buying more of the same, or is the plan to have 4 OSDs per node?
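
If you want to measure the spun-down case yourself, one way (a sketch, assuming a Linux host with the sdparm utility installed; the device name is a placeholder) is to stop a drive manually and compare readings at the wall:

  # Spin down a SAS drive via the SCSI STOP command (/dev/sdX is a placeholder)
  sdparm --command=stop /dev/sdX
  # Spin it back up
  sdparm --command=start /dev/sdX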
 

zunder1990

Active Member
Nov 15, 2012
A few things about Ceph:
  • It LOVES RAM; plan on 4GB of RAM per disk (OSD).
  • Three hosts is the bare minimum number of hosts; the more the better.
  • You will suffer if you have less than a 10Gb network.
I think you will have a hard time finding an ARM-based board that will meet the above requirements.
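
For reference, recent Ceph releases target roughly 4GB per OSD daemon by default through the osd_memory_target option; a sketch of checking or lowering it on a running cluster (the value shown is illustrative, and shrinking it trades away cache performance):

  # Show the current per-OSD memory target (defaults to 4GB on recent releases)
  ceph config get osd osd_memory_target
  # Lower it to 2GB per OSD on a RAM-starved lab box (not recommended for production)
  ceph config set osd osd_memory_target 2147483648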
 

storageNator

New Member
Aug 6, 2023
Hi,
thank you for your answers.

I forgot to mention that it is not supposed to be 4 HDDs per server; I have a total of 4 HDDs.

Yes, I am aware that I need at least 3 servers.
To start, the plan is one HDD and 1-2 NVMe drives per server.
If needed, I will add one more HDD; that should be enough.
 

Sean Ho

seanho.com
Nov 19, 2019
Apart from the SAS requirement, an SFF (not USFF) cluster should fit the bill for a learning/backup cluster. They typically take 1x 3.5", some additionally 1x 2.5", plus 1x 2280 M-key and 1x 2230 A+E-key M.2 slots, and some add a second 2280 M-key. Most will have a PCIe x16 and an x1 slot, one of which you could use for a 10GbE NIC. 7th-gen units (e.g., M710s, OptiPlex 5050) are super cheap nowadays and still pretty good on idle power.
 

storageNator

New Member
Aug 6, 2023
Hi @Sean Ho and all others.
Sorry for the very late reply.
I've had a lot of stress over the last few months.

But now the stress has been significantly reduced and I finally want to implement my backup concept.

I have also thought about my requirements again and will probably not use Ceph for the time being, as I don't need the high availability.
Instead, I will probably build a smaller system to which the clients synchronize their data directly, and then back that data up to a second server, which will have the SAS hard drives installed.
But maybe one day I will expand/convert this into a Ceph cluster.

I'll probably power on the second server with the HDDs once a day or so, so its power consumption won't be quite so important.
But it would be good to get it as quiet as possible, so that I can leave it in a room that is occasionally occupied.
Which computer/server would you recommend?
 

Sean Ho

seanho.com
Nov 19, 2019
A single NAS is certainly simpler than a whole storage cluster. Noise tolerance is very subjective, and spinners necessarily make noise -- not just the spindle, but the drive heads, and it's sporadic, so not exactly white noise. If noise sensitivity is high, your storage needs are not large, and budget permits, you might consider an all-flash ZFS pool, e.g., using TrueNAS (Core for stability, Scale for the convenience of Linux under the hood).
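
For illustration, a small all-flash pool like that is a one-liner (a sketch; the pool name and device paths are placeholders):

  # Create a mirrored pool from two SSDs, referenced by stable by-id paths
  zpool create tank mirror /dev/disk/by-id/<ssd-1> /dev/disk/by-id/<ssd-2>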

Once you've decided on the type and number of drives (spinners or flash; 3.5"/2.5"/M.2/U.2), that informs chassis selection (4U, 2U, a big tower like the Enthoo Pro 2, or a small tower), which in turn informs the choice of motherboard and cooler. mATX LGA1151-2 (8th/9th gen) consumer boards are very affordable now, power efficient, and can use the iGPU, so you don't have to give up a PCIe slot to a discrete GPU. mATX is as small as I'd go; I feel ITX is a bit too limiting. You will probably eventually want both a SAS HBA and a 10/25/40GbE NIC.
 

storageNator

New Member
Aug 6, 2023
Hi,
thanks for the answer again.

Yes, I'll think about whether to use TrueNAS, something else, or just create a ZFS pool manually.
In the latter case I will probably use Debian 12.

What I still have to think about is the most efficient way to transfer the data from the primary server to the backup server.
The primary server will probably be a Proxmox / Debian 12 machine.

I know that the noise from HDDs can only be reduced to a limited extent; I was more concerned about the CPU.
I still have a few ATX cases left over that I could probably use for this.
I don't want to go all-flash for the time being, as I get decommissioned HDDs free of charge from my employer.
But to speed up the ZFS pool, I am certainly open to buying an SSD as a special device.

Edit:
The HDDs I get are 3.5-inch SAS drives.
 

Oliver Mack

New Member
Sep 25, 2014
Mini clusters are hip at the moment, but isn't it easier to take an Epyc Naples/Rome or Skylake Xeon with 256GB RAM and a few SSDs and virtualize the Ceph cluster?
That way no 10/25Gb switch is needed, and an X11SPL-F with a Xeon 6150 and 256GB RAM costs perhaps 600-700€ on eBay.
And the Xeon/Epyc should be more stable than a cluster of Chinese mini PCs - and in terms of power consumption, compare one server against 4 mini PCs plus a 10Gb switch.
 

storageNator

New Member
Aug 6, 2023
And what advantage would a virtualized Ceph cluster on a single server give me?
My bigger concern with that setup is reliability.


Sure, sounds like a reasonable plan. Remember to mirror any special vdevs; if the metadata dies, the pool dies with it.
Thanks for the hint!
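
For anyone following along, adding a mirrored special vdev to an existing pool might look like this (a sketch; the pool name and device paths are placeholders):

  # Attach a mirrored pair of NVMe drives as a special vdev for metadata
  # (and optionally small blocks); if this vdev is lost, the pool is lost
  zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1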
 

Sean Ho

seanho.com
Nov 19, 2019
I can see how Ceph virtualised on a single physical host would be an option for learning and proof-of-concept; you can use small loopback block devs for the OSDs and play around with recovery.
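
A rough sketch of that idea, assuming a throwaway VM with Ceph already deployed (the file path, size, and loop device are arbitrary):

  # Create a sparse backing file and attach it as a loop device
  truncate -s 10G /var/lib/ceph-osd0.img
  losetup /dev/loop0 /var/lib/ceph-osd0.img
  # Turn the loop device into an OSD (ceph-volume ships with Ceph)
  ceph-volume lvm create --data /dev/loop0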

But if a single NAS is fine for the end goal of home backup, then there's no reason to look beyond ZFS.
 

oneplane

Well-Known Member
Jul 23, 2021
If you want a NAS, and one that 'just works': use TrueNAS. You can also replicate to a secondary node if the data is that critical, but when it comes to reliability (uptime), more nodes don't always mean more uptime, because they add more things that can break. An active-passive replicated setup is the simplest 'extra reliable' setup if you don't mind changing IPs or FQDNs when you need to switch to the other node.
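
With ZFS underneath (TrueNAS or plain Debian), that replication is typically snapshot-based; a minimal sketch, where the pool, dataset, snapshot, and host names are all placeholders:

  # Snapshot the dataset and send it to the standby node over SSH
  zfs snapshot tank/data@monday
  zfs send tank/data@monday | ssh backup-host zfs recv -F backuppool/data
  # Subsequent runs only send the delta between snapshots
  zfs send -i tank/data@monday tank/data@tuesday | ssh backup-host zfs recv backuppool/data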

If a few days of downtime isn't that bad, you can also just use one node, copy backups elsewhere (e.g. Backblaze), and replace the hardware if something breaks (how long that takes will depend on how fast you can get parts ordered and delivered :D )
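
An off-site copy along those lines can be as simple as a scheduled rclone job (a sketch, assuming a configured rclone remote named b2; the path and bucket name are placeholders):

  # One-way sync of the local backup dataset to a Backblaze B2 bucket
  rclone sync /tank/data b2:my-backup-bucket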
 

storageNator

New Member
Aug 6, 2023
This will be a backup server, so yes, a few days of downtime would be acceptable.

Which SAS PCIe card is actually recommended?
(If relevant to this question: the card will go into a consumer mainboard with an Intel i750 CPU.)
 

Sean Ho

seanho.com
Nov 19, 2019
Just about any of the popular LSI HBAs that can be flashed to IT mode, or the myriad OEM versions of them, based on the SAS2008, 2208, 2308, 3008, 3108 (no IT mode, but JBOD), 3408, etc. Also the Adaptec ASR 7-series (SAS2) and 8-series (SAS3). SAS2 cards go for as little as $9 (ASR-7805) and SAS3 for as little as $17 (Inspur 3008).
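
Once a card is installed, you can confirm it is running IT-mode firmware with LSI's flashing utilities (sas2flash for SAS2 chips, sas3flash for SAS3; shown here as a sketch):

  # List all detected SAS2 controllers with firmware type and version
  sas2flash -listall
  # Show details for the first controller (look for 'IT' in the firmware identity)
  sas2flash -c 0 -list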