From thought to hardware: How did you do it?


Ralph_IT

I'm called Ralph
Apr 12, 2021
Hi,

I've been reading a lot of threads, blogs and posts about how to build your own home lab, especially the network, which seems to be the stepping stone.
One thing I miss from all of them is the order of choosing components. I mean, there is always a detailed list of all the hardware and the justification for why it was chosen, but I never saw an explanation of the order, and why.

So, for all of you that have your homelab built up/planned:
What was the starting point?
Did you plan it from the outside in (like ISP connection -> router -> switch... whatever), or did you build it around a central piece (like building everything around a main server/NAS/filer)?
Did you have any self-imposed restrictions, aside from budget constraints?

Thanks for your time and comments.

Edit: Typo
 

TLN

Active Member
Feb 26, 2016
I was living in an apartment, so everything had to be small and silent. I've since moved to a big house, but I'm still following that idea.
I have a 10G network available for the servers and such, using fanless 10G/PoE switches.
I'd prefer to keep it silent and small until I get my own house and can place a rack (or two) there.
 

nabsltd

Well-Known Member
Jan 26, 2022
So, for all of you that have your homelab built up/planned:
What was the starting point?
For me, it was a move to a new house, which had space in the basement to build it out like a (cheap) datacenter. And, it was the "non-computer" parts that were first on my mind.

I had circuits added where I wanted to place the rack, and bought a 42U rack that could be attached to the concrete floor. Then, I focused on chassis. I knew I wanted 2x 4U with 24x drives for storage, plus 3x 2U chassis for compute (because you need at least 3x for a quorum for most clusters). I went with 2U over 1U because you have a lot more options for hardware inside, plus they are generally quieter due to the larger fans.

I still think chassis choice is the starting point, because a flexible, well-built chassis will last far beyond the first components that you install in it. And, a mistake with the chassis can limit the component choices. Using the same thought process, newer mini/micro systems that are being used as building blocks for home datacenters need to have long-term expansion (i.e., external ports and available internal upgrades) as a key factor in deciding which one to buy. The CPU is often the least important thing about those smaller boxes, if the RAM and internal storage can be upgraded, and if there are enough external ports to keep it running longer.
 

louie1961

Active Member
May 15, 2023
To be honest, I am not sure the path I took was the most efficient. First I started messing around with the cloud, hosting WordPress and NextCloud on AWS. Next I built a small NAS using a Raspberry Pi and OpenMediaVault. Then I tried to set up VLANs on my existing Netgear router by downloading and installing OpenWRT. After that I bought a used HP Z640 workstation with an E5-2690 v3 CPU and 128GB of RAM that I found for around $300. I really dug into Proxmox at that point, as well as Docker, Docker Compose, and virtualizing a bunch of stuff, including pfSense. That led me to a separate pfSense device and managed switches. Somewhere along the way I found an inexpensive used Synology 2-bay that I added to the mix.

I ran that setup for almost a year before I got the bug to downsize everything and reduce electrical power consumption. Now I have three Proxmox nodes (the Z640 is a development/sandbox unit only now and gets turned on infrequently). The other two are an HP EliteDesk Mini with an i5-12500T CPU and a NUC-like device with an N100 CPU. I still run a separate pfSense device (also N100 now) as well as two different NAS machines. My whole setup, including two cable modems (I have redundant internet connections) and excluding the Z640 box, runs at about 85-90 watts at the wall. It works very well for my needs, so no rack-mount equipment drawing hundreds of watts for me.
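That 85-90 W figure translates directly into running cost, which is worth sketching before choosing between a rack of servers and a handful of mini PCs. A minimal back-of-the-envelope calculation; the $0.15/kWh rate is an assumption for illustration, so substitute your own tariff:

```python
# Rough yearly energy use and cost of an always-on lab at a steady wall draw.

def yearly_kwh(watts: float) -> float:
    """kWh consumed per year at a constant draw of `watts`."""
    return watts * 24 * 365 / 1000

def yearly_cost(watts: float, rate_per_kwh: float = 0.15) -> float:
    """Yearly cost; rate_per_kwh is an assumed example tariff."""
    return yearly_kwh(watts) * rate_per_kwh

# Compare a ~90 W mini-PC setup with a ~400 W rack-mount setup.
for w in (90, 400):
    print(f"{w} W -> {yearly_kwh(w):.0f} kWh/yr, ~${yearly_cost(w):.0f}/yr")
```

At the assumed rate, the difference between the two setups above is several hundred dollars a year, which is why downsizing pays for itself quickly.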
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I think this is usually different for everyone... some need low power, some need low price, some need to re-use what they have, some need 100Gbit and very high-end hardware to simulate their work environment non-virtualized... others will virtualize and learn/test...
 

pricklypunter

Well-Known Member
Nov 10, 2015
Canada
I guess I started with how much storage I was likely to need, then doubled that figure. I also had a fairly good idea of what I would be doing with it, besides the general home network type stuff, so I picked components accordingly. To that end, I looked for a good quality chassis that I knew could handle my requirements. I built out an AIO into said chassis with future upgrades in mind, so a flexible mainboard plus the required cards, quality disks, and a decent power supply. There was sufficient wiring already in the house, which I re-purposed, and I just threw a managed PoE switch on it in the basement, configured to provide a collapsed backbone. Throw in the cable company gateway in bridge mode and I was pretty much a happy camper. Obviously since then I've been tinkering with things as I try out new stuff, and I have performed several upgrades, but the basic building blocks haven't changed much :)
 

zachj

Active Member
Apr 17, 2019
You learn from mistakes.

In other words: the more you plan your lab up front, the less you learn.

In other words: it's better to have no plan at all; start buying stuff and figure out how best to make it work until it no longer meets your needs. Rinse. Repeat.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
You learn from mistakes.

In other words: the more you plan your lab up front, the less you learn.

In other words: it's better to have no plan at all; start buying stuff and figure out how best to make it work until it no longer meets your needs. Rinse. Repeat.
NAILED IT! :cool:
 

zachj

Active Member
Apr 17, 2019
On a more serious note, if you aren't made of money you should indeed figure out what you KNOW you need, so you don't waste money on stuff you know won't meet those needs.

For example, if you know you need NVMe support or SR-IOV support, then make sure you buy hardware that has it.

Realistically, most people can survive with a single 4TB SATA SSD, a 4-core CPU, and anywhere from 32-128GB of RAM, all things that can be satisfied by almost any consumer platform made in 2017 or newer.
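If you already have (or can borrow) a candidate machine, those capabilities can be checked from a running Linux system rather than guessed from spec sheets. A rough sketch, assuming Linux and the standard sysfs/procfs paths; NVMe block devices show up under /sys/block, SR-IOV-capable NICs expose sriov_totalvfs, and the CPU flags advertise VT-x/AMD-V:

```python
# Probe sysfs/procfs for NVMe, SR-IOV, and CPU virtualization support (Linux).
from pathlib import Path

def has_nvme() -> bool:
    """Any NVMe block devices visible to the kernel?"""
    return any(p.name.startswith("nvme") for p in Path("/sys/block").glob("nvme*"))

def sriov_nics() -> dict[str, int]:
    """Map of NIC name -> max virtual functions, for SR-IOV-capable NICs."""
    out = {}
    for f in Path("/sys/class/net").glob("*/device/sriov_totalvfs"):
        try:
            out[f.parts[-3]] = int(f.read_text())  # parts[-3] is the NIC name
        except (OSError, ValueError):
            pass
    return out

def cpu_virt() -> bool:
    """CPU advertises VT-x (vmx) or AMD-V (svm)?"""
    try:
        flags = Path("/proc/cpuinfo").read_text()
    except OSError:
        return False
    return "vmx" in flags or "svm" in flags

if __name__ == "__main__":
    print("NVMe drives:", "yes" if has_nvme() else "none found")
    print("SR-IOV NICs:", sriov_nics() or "none found")
    print("CPU virtualization:", "yes" if cpu_virt() else "not advertised")
```

Output obviously varies per machine; the point is that a five-minute check before buying (or before wiping a box for Proxmox) is cheaper than discovering a missing feature afterwards.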
 

louie1961

Active Member
May 15, 2023
If I were starting over and future me could talk to past me, I would go with a Synology NAS, a pfSense firewall appliance, a cheap 2.5GbE managed switch, and a mini/micro 1-liter PC of your choosing. You can do a whole lot with those four pieces of hardware.
 

i386

Well-Known Member
Mar 18, 2016
Germany
Homelab or home infrastructure?

I used to mix both, but I learned that it's not a good idea to experiment on critical stuff (e.g. messing up a virtual network and killing the internet access behind the virtual router for hours).

My home infrastructure is based on "needs", so no real budget constraint. (Like fixing your car when it's unsafe or broken.)

My homelab is based on what I want to learn and what I can get as cheap as possible to learn/try it. Some purchases are made based on curiosity (e.g. CX-2 NICs from the "dirt cheap 10gbe" network thread a few years ago, or NVRAM SSDs like the Flashtec or RMS-200 here in the forums).
This is budgeted (~200€ per month), but I rarely use it because I can try a lot of stuff at work and don't have to do it at home anymore :D
 

mattventura

Active Member
Nov 9, 2022
I try to buy things that are general enough that I will always have a use for them, and won't go obsolete quickly. Examples would include a good chassis (e.g. an ancient SC826 or 847 can be modernized with a new backplane). I make sure that the hardware I buy is good for virtualization, so that workloads can be de-coupled from the underlying hardware.

I try to avoid using one box for everything, because you want to be able to take down individual hardware for tinkering without impacting the rest of the environment. For example, you don't want your internet to go down because you wanted to restart your storage server.

If I had to start from scratch, I'd probably do something like this:
  • One box to handle routing, and virtualizing network things like the WAP controller
  • Dedicated switches and WAPs
  • Two primary general workload boxes (VMs and k8s)
  • One box for workloads that aren't as portable (e.g. things that need a GPU passed through, or bulk local storage)
The reason for having three workload boxes is that a lot of HA systems (the k8s control plane, Ceph monitor nodes, and so on) require a quorum to operate; that is, you need three nodes to be able to tolerate a single node failure. This allows you to have HA distributed storage and container orchestration. For some HA workloads, such as VMs, you could get by with two nodes, but then each node must be able to handle the full workload of both, as opposed to 1.5 nodes' worth of capacity in a 3-node cluster.
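The quorum rule above is just majority voting: a cluster of n nodes needs floor(n/2) + 1 in agreement, so the failures it can tolerate are whatever is left over. A minimal sketch of that arithmetic, which also shows why even node counts are a poor value (4 nodes tolerate no more failures than 3):

```python
# Quorum math for HA clusters (Raft, Ceph monitors, k8s etcd, and similar):
# a strict majority of nodes must agree for the cluster to make progress.

def quorum(n: int) -> int:
    """Smallest strict majority of an n-node cluster."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Nodes you can lose while still holding quorum."""
    return n - quorum(n)

for n in (2, 3, 4, 5):
    print(f"{n} nodes: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

A 2-node cluster tolerates zero failures (losing either node loses the majority), which is exactly why three boxes is the usual floor for home HA setups.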
 

louie1961

Active Member
May 15, 2023
homelab or home infrastructure?
In my case, a bit of both. The pfSense box, switch, and wireless access point are mostly home infrastructure. However, if I weren't doing homelab/self-hosted stuff, I wouldn't need a managed switch or VLANs. I wouldn't need 2.5GbE either. BUT I do have redundant internet connectivity because I work from home: if my cable internet goes down, I revert to T-Mobile home internet. The same is true for those times when I mess something up in my normal home infrastructure. My wife and daughter know to revert to the T-Mobile internet, which has its own SSID and Wi-Fi setup.