The Ultimate Off-grid SOHO Server Rack (MegaThread)


Levi

Member
Mar 2, 2015
76
5
8
34
Preface

This build won't take place until late 2022 or beyond. I still have to buy a property and build the workshop before I can officially get started. I'm starting this thread and others to better understand which setups and technologies will work best for my requirements. It's been almost 7 years since I worked in a large data center and even then I was mostly in the NOC. So I don't have all the answers but I'm confident the STH community can provide some valuable insights.

Priorities
  1. Low power - If I can keep the entire rack under 600 watts, then it's likely I can run it on solar only. Anything past 600 W will probably require an on-site diesel generator that kicks on and charges the batteries when they are low. Something like 30 kWh of battery, and probably a 4 or 5 kW generator.
  2. Highly available - To be honest this is about the same priority as low power. I would like no single point of failure in the rack. That means 2 WANs, dual routers with VIP, dual switches, Hyperconverged servers using something like Hyper-V failover cluster, etc.
  3. Price - I'm not rich, so I will lean toward consumer (prosumer?) gear, but if it matches the other priorities, I will splurge if I have to.
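For scale, priority 1's numbers pencil out roughly like this (a sketch using the figures above; the 80% usable-capacity derate is my assumption, not a spec):

```python
# Rough runtime math for a 600 W rack on a 30 kWh battery bank, no sun.
BATTERY_KWH = 30.0       # from the post
RACK_WATTS = 600.0       # target ceiling from priority 1
USABLE_FRACTION = 0.8    # assumption: don't fully discharge the bank

def runtime_hours(battery_kwh: float, load_watts: float,
                  usable: float = USABLE_FRACTION) -> float:
    """Hours the bank carries the load with zero solar input."""
    return battery_kwh * usable * 1000.0 / load_watts

print(runtime_hours(BATTERY_KWH, RACK_WATTS))   # 40.0 -> under two cloudy days
```

So the bank rides through roughly a day and a half of bad weather before the generator has to pick up the slack.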

Sub-threads

I will be using this thread for the actual build. I will be making smaller threads to discuss specific technologies, software and hardware that I will use in the build.
If you have thoughts on something that doesn't have a thread, please DM me or just post here in the MegaThread.

Thanks
 
Last edited:
  • Like
Reactions: itronin

kapone

Well-Known Member
May 23, 2015
1,095
642
113
*subscribing* - Interesting thread. :)

That said, the top burning question in my mind is:

Highly available - To be honest this is about the same priority as low power. I would like no single point of failure in the rack. That means 2 WANs, dual routers with VIP, dual switches, Hyperconverged servers using something like Hyper-V failover cluster, etc.
Why? Just for shits and giggles or is there an actual need?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
E3-1220L V2 with S3500 mirrored SSDs + pfSense = around 17 W each. Cheap and easy. You can get an entire chassis/mobo/CPU for $100-200 ready to run; just add an SSD. Even lower power but more up-front $ are the mini pfSense appliances that are about 5"x5" but cost $200-$300 depending on specs; they draw less than 17 W and are smaller than 1U. You could do this with an Intel Atom, but by the time you build that out you're talking even more $$ for lower-spec hardware.

For the switch, do you really need HA? Or could you just load the same config on a second one, keep it powered down, and power it on only when you need it? This is what I do at home for my router and switch; you save power all the time and can swap if needed.

For the server/hypervisor, it's hard to beat AMD right now for power/performance ratio: a 5600X, or you can save $ with the previous gen. The ASRock Rack motherboard is the big $ item here, and the RAM isn't cheap, but neither is other UDIMM (RDIMM isn't a deal right now either, June 2021). Another option, an E3 v3, will cost MUCH LESS to build; we're talking $150 for CPU + motherboard and maybe 4 or 8 GB if you can find a deal. Less performance, but if you don't have high-demand needs you could build 4x of these for nearly the same as one 5600X. If you're going to load them with 32 GB (or more in the AMD), RAM cost is almost the same right now, so you're spending that either way.

If you need more RAM, you're going to use more power. If you need more cores, you're going to use more power no matter which system you go with. And if you need more cores AND more RAM, then of course even more power ;)

We have been "off grid" for weeks at a time in winter for over 10 years, and I run my entire house on a generator during that time, including all networking, cameras, office PC + house stuff (fridges and freezers), and we use around 2 gallons of gas per day.

There's absolutely no way you can do this affordably on diesel. While a diesel generator will last much longer than a gas one, you're going to spend way more on diesel operating cost than on replacing that gas generator; I'd spend the $$ on more solar (and I have). I have 1000s of hours on my original Honda EU2000i and it starts up on the first pull every time; my second one starts on the second pull.

Solar is where the $$ should go, not a $4,000 or $8,000 diesel generator. IMO get a used/deal gas Honda 3000i, or a 6500 if you need 220 V. If $ were no object then yes, I would have a top-of-the-line diesel backup and a 5,000-gallon tank, don't get me wrong, but at this stage I would be (and am) starting to build my solar system. I've run my Honda EU2000i for 100 hours straight, stopping only to add fuel, keeping animals alive with heat during a snow storm. I really am impressed with what these can do and how they hold up.
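The fuel math in the post is easy to sketch (the ~2 gal/day figure is from above; the gas price is my assumption):

```python
# Fuel use and cost for running the house on a gas generator during an outage.
GAL_PER_DAY = 2.0        # figure quoted above
GAS_PRICE = 3.50         # $/gal, assumed for illustration

def outage_fuel(days: int):
    """Return (gallons, dollars) for an outage of the given length."""
    gallons = days * GAL_PER_DAY
    return gallons, gallons * GAS_PRICE

print(outage_fuel(21))   # three weeks: (42.0, 147.0)
```

At that burn rate, even a multi-week outage costs less in gas than the price premium of a diesel unit, which is the argument being made.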
 
Last edited:

Levi

Member
Mar 2, 2015
76
5
8
34
*subscribing* - Interesting thread. :)

That said, the top burning question in my mind is:


Why? Just for shits and giggles or is there an actual need?
I knew this would be one of the first things asked haha. I will try and add a longer motivation section to the original post.

So I guess it depends on the person. I happen to have been traveling for work the last 7 years or so. Currently stuck in India waiting on flights to open back up. I'm kinda ready for a home base of operations. I know I will be cutting back on traveling but I still expect to be gone about 6 months of the year.

Now I'm not 100% certain I will be off-grid since I haven't bought the property or anything, but most of the places I have looked at have been around 20-acre lots with no utilities. So I was just thinking about what all I would want at my base camp. Some external cameras. Smart home? Fire/smoke detectors and other things. Then I was like, well, maybe Plex? Pi-hole? What about backups for my PC? Some of these things I could push to the cloud if I wanted, but I'm already spending nearly $1,200 a year on Kubernetes and other personal stuff in the cloud that I could eliminate.

So that was just more backstory. Me trying to not answer the question lol.

is there an actual need
If I lose access to my home network I don't expect the world to end or anyone to get hurt. I just know I will sleep better if I'm not getting any Pingdom alerts and I know I have access to what's going on at the house. So as soon as I went down that road I was like okay.... 2x WAN, 2x routers, 2x switches. So let's say connectivity is a strong need. I also work from home when I'm not traveling so I really can't be missing work.

The hyperconverged home lab is more of a shits-and-giggles thing. I'm not even sure it will make it into the build completely, but I want to try. When I worked at the data center, they gave us a rack to play with. I ended up building a failover cluster using 2 compute servers and 2 servers set up with StarWind vSAN. It was probably the coolest thing I ever played with. I would host gaming and TeamSpeak servers for friends, and I could move VMs from one cluster member to the other to do Windows updates and hardly anyone would notice. So it suits the theme of HA, and I think it will be the most interesting and challenging part to build while staying below that 600-800 watt threshold.
 
  • Like
Reactions: Amrhn

Levi

Member
Mar 2, 2015
76
5
8
34
E3-1220L V2 with S3500 mirrored SSDs + pfSense = around 17 W each. Cheap and easy. You can get an entire chassis/mobo/CPU for $100-200 ready to run; just add an SSD. Even lower power but more up-front $ are the mini pfSense appliances that are about 5"x5" but cost $200-$300 depending on specs; they draw less than 17 W and are smaller than 1U. You could do this with an Intel Atom, but by the time you build that out you're talking even more $$ for lower-spec hardware.
I will cross-post your suggestions to the Low Power / HA - Router options thread. It seems like the HA router setup might be easier than I expected, and I was already given some pretty decent advice I completely overlooked, like "Why not just use a normal WAP and flash DD-WRT?" I don't know why I didn't think of that; I flashed at least 2 or 3 WRT54Gs in my day.

I think to hit the final goal I will need to pinch every watt. That's why I like the 2.5 W Pi router. I'm also not sure what I need to run on the routers and how much CPU that will consume. On my normal home router I usually only modify the DNS servers, turn on dnsmasq, change the DHCP range, and add a couple of static leases. That's it.

In this project I'm guessing it will be about the same, but I want a WireGuard VPN (somewhere on the network), maybe Dynamic DNS, and I was thinking of having some VLANs. Nothing crazy like packet shaping or IDS. Maybe some logging and some metrics from the routers? Basic stuff like access logs, CPU/memory, uptime, etc.

Now, I have only read about VLANs and never really used them. I was thinking of having a couple of segments like wifi, guest wifi, IoT, management, and then whatever I would call the "rack," maybe "core," with some simple route rules like not allowing guest wifi to access management devices. I'm not sure about the CPU hit for that, or whether the switches I use will be L3 and able to handle the routing.
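A minimal dnsmasq sketch of the segments described above. Interface names, subnets, and the MAC are placeholder assumptions, and the actual guest-to-management block would still be a firewall rule on the router or L3 switch, not a dnsmasq setting:

```
# Assumed VLAN sub-interfaces on the router; adjust names/subnets to taste.
interface=eth0.10        # "core" rack VLAN
interface=eth0.20        # IoT VLAN
interface=eth0.30        # guest wifi VLAN

# Upstream DNS instead of the ISP default
server=1.1.1.1

# One DHCP pool per subnet; dnsmasq matches each pool to the
# interface whose address falls inside that subnet.
dhcp-range=10.0.10.100,10.0.10.199,12h
dhcp-range=10.0.20.100,10.0.20.199,12h
dhcp-range=10.0.30.100,10.0.30.199,2h

# Static lease for a camera (placeholder MAC)
dhcp-host=aa:bb:cc:dd:ee:ff,10.0.20.10
```

That covers everything listed for the "normal home router" case plus per-VLAN DHCP, and it's light enough CPU-wise for a Pi-class router.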

For the switch, do you really need HA? Or could you just load the same config on a second one, keep it powered down, and power it on only when you need it? This is what I do at home for my router and switch; you save power all the time and can swap if needed.
I think this will also be a common theme: what can be fault-tolerant vs. highly available? It seems even I have mixed up the terms, according to my favorite article on the subject. The plan is no single point of failure (at least in the core network), and I would prefer fault-tolerant over highly available. If the VPN is on the routers, could I still reach the network with a switch down and WoL the standby switch? That might be something I look at. But if I can just get 2 regular switches and plug one into the other, then as long as each switch is only 20 or 30 watts (MikroTik), it might be worth it.
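The WoL idea is simple enough to script from anywhere with a VPN path in; a minimal sketch (MAC and broadcast address are placeholders):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Standard WoL magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet; the target NIC must have WoL enabled."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

One caveat: plenty of rack switches can't be woken this way at all, so a smart PDU outlet you toggle remotely may be the more realistic version of the same cold-standby idea.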

The hardest thing about this project is the dependencies from one layer to the next. What I choose for the routers will greatly influence every other part of the build. I'm actually better with routers than switches for whatever reason, so I will definitely be making a sub-thread for switch discussions. I'm not even sure what I need; I think 12-24 ports should be fine for the main rack? Then I can dangle a switch in the house for Wi-Fi and home entertainment devices? But what about the security cameras and stuff? PoE? This is where some investigation and PoCs might help. Is a single PoE switch more energy-efficient than using DC wall outlets to power the cameras? What about when the Cat5 run is 100+ feet? These little things will end up mattering the most, I suspect.
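On the PoE-vs-DC question, the I²R line loss over a long run is easy to estimate; a sketch using an assumed 24 AWG resistance of ~0.084 Ω/m per conductor and one pair carrying current each way (illustrative numbers, not measurements):

```python
# Compare line loss for a ~13 W camera over ~100 ft of Cat5-class cable.
AWG24_OHM_PER_M = 0.0842   # per conductor, assumed

def line_loss_watts(load_w: float, volts: float, length_m: float,
                    pairs_per_direction: int = 1) -> float:
    """I^2 * R loss; each pair is two conductors in parallel, out and back."""
    loop_r = AWG24_OHM_PER_M * length_m / pairs_per_direction
    current = load_w / volts
    return current * current * loop_r

LENGTH_M = 30.5            # ~100 ft
print(round(line_loss_watts(13, 48, LENGTH_M), 2))   # PoE at ~48 V: 0.19 W
print(round(line_loss_watts(13, 12, LENGTH_M), 2))   # same run at 12 V: 3.01 W
```

The takeaway: ~48 V PoE loses well under a watt at that distance while a 12 V DC run wastes several, so centralizing power in one PoE switch is usually the efficiency win unless the switch itself idles high.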

For the server/hypervisor, it's hard to beat AMD right now for power/performance ratio: a 5600X, or you can save $ with the previous gen. The ASRock Rack motherboard is the big $ item here, and the RAM isn't cheap, but neither is other UDIMM (RDIMM isn't a deal right now either, June 2021). Another option, an E3 v3, will cost MUCH LESS to build; we're talking $150 for CPU + motherboard and maybe 4 or 8 GB if you can find a deal. Less performance, but if you don't have high-demand needs you could build 4x of these for nearly the same as one 5600X. If you're going to load them with 32 GB (or more in the AMD), RAM cost is almost the same right now, so you're spending that either way.
I will definitely get a thread up soon about this. It's the hardest part of the build for sure, and I think it will also require the most creativity lol; I'm even thinking about gutting laptops for nodes. I want to first figure out the hypervisor, so expect the software thread first. I'm roughly budgeting 200 W for core networking. That leaves about 400 W for servers. Let's say 125 W each at average load. That's not an easy ask. Throw in 32 GB of RAM and 10ish TB of storage per node and it's really getting out of hand. I'm even trying to figure out which uses the least electricity, LC fiber or passive copper lol.

Once I know the hypervisor, I can better plan the nodes. Pretty sure Hyper-V failover clustering on S2D wants 4 nodes (2-node setups exist, but 4 is what's recommended for full resiliency). Kubernetes is 3 nodes but can't do live migrations or anything cool; that usually means doing crazy stuff in software to make stateful VMs work correctly. Not sure about the others because I've never touched them. I tried ESXi but always had trouble getting the trial license and stuff to work. Windows Hyper-V has been the easiest so far with the most features, so it's what I would like to use.
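The budget math above is worth writing down, because it decides the node count (figures from the post):

```python
# Does the ~400 W server budget fit a 4-node cluster at 125 W per node?
SERVER_BUDGET_W = 400     # ~600 W target minus ~200 W for core networking
PER_NODE_W = 125          # assumed average load per node

nodes_at_125w = SERVER_BUDGET_W // PER_NODE_W
per_node_for_4 = SERVER_BUDGET_W / 4

print(nodes_at_125w)      # 3 -> one node short of a 4-node cluster
print(per_node_for_4)     # 100.0 -> watts per node needed to fit 4 nodes
```

In other words, a 4-node cluster only fits the budget if each node averages about 100 W, which pushes toward the low-power CPU choices discussed above.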

We have been "off grid" for weeks at a time in winter for over 10 years, and I run my entire house on a generator during that time, including all networking, cameras, office PC + house stuff (fridges and freezers), and we use around 2 gallons of gas per day. There's absolutely no way you can do this affordably on diesel. While a diesel generator will last much longer than a gas one, you're going to spend way more on diesel operating cost than on replacing that gas generator; I'd spend the $$ on more solar (and I have). I have 1000s of hours on my original Honda EU2000i and it starts up on the first pull every time; my second one starts on the second pull. Solar is where the $$ should go, not a $4,000 or $8,000 diesel generator. IMO get a used/deal gas Honda 3000i, or a 6500 if you need 220 V. If $ were no object then yes, I would have a top-of-the-line diesel backup and a 5,000-gallon tank, don't get me wrong, but at this stage I would be (and am) starting to build my solar system. I've run my Honda EU2000i for 100 hours straight, stopping only to add fuel, keeping animals alive with heat during a snow storm. I really am impressed with what these can do and how they hold up.
I will definitely need an emergency generator for those severe storm events. I was originally thinking an RV generator; something like this is the standard. They typically run on LP/NG, and I was thinking of getting one of those huge tanks installed. I'm not sure how fuel-efficient these things are compared to gas/diesel, but it will all need to be looked at in time. Where I buy land will also play a huge role in what the energy requirements for the property will be, how much can be harvested naturally, and what kind of backup power gets used. At least with a huge LP tank I could get a gas stove and heat the house in an emergency.

Unfortunately I won't be at the house to start the generator, so I'm looking for something with a little bit of brains that I can wire a microcontroller into and control myself. That way I can say: if the batteries are below 30%, run the generator every other hour until they're above 80%. I will eventually cover all the power stuff in this thread or separately, but I do appreciate all the feedback.
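The control rule described above is a classic hysteresis loop; a sketch using the post's 30%/80% thresholds (the actual microcontroller I/O is out of scope here):

```python
# Generator charge control: start below 30% state of charge, stop above 80%.
START_BELOW = 30.0   # % SOC
STOP_ABOVE = 80.0    # % SOC

def generator_should_run(soc: float, running: bool) -> bool:
    """Hysteresis keeps the generator from short-cycling near one threshold."""
    if soc < START_BELOW:
        return True
    if soc > STOP_ABOVE:
        return False
    return running   # between thresholds: keep the current state

# Example sweep: SOC recovering from 25% while the generator charges the bank
state = False
for soc in (25, 45, 70, 85):
    state = generator_should_run(soc, state)
print(state)   # False: generator shut off after passing 80%
```

The two-threshold gap is the important part; a single 30% threshold would flap the generator on and off as the batteries hover around it.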

Thanks
 
Last edited:

kapone

Well-Known Member
May 23, 2015
1,095
642
113
If I were doing this (and I'm considering something similar...in a totally different context), I'd separate my infrastructure into two. Critical and Non-Critical.

Critical - Define it however you want, but again, it is the absolute minimum needed for "whatever".
Non-Critical - Anything not above, and does not affect "operations".

Like @T_Minus said, solar is where it's at. I'll go one step further and add a Tesla Powerwall to it. A fully charged Powerwall could run your "critical" infrastructure for a day or two on its own, and then a second power backup like a generator could kick in.

I recently did an inventory of my infrastructure and realized that my critical stuff is less than 300w of consumption. To give you an idea of what that includes:

- Core switch (no HA)
- core router (no HA)
- External WAN bits (like ONT, Starlink power etc)
- A single host running ESXi, with a few critical VMs
- POE cameras (not all, again, critical only)
- A few minor bits of automation for the house, that are very very low power.

That's it. Other than this, the rest of the stuff goes dark, when the UPS (multiple) runs out of juice.

Edit: This is what my "critical" server looks like. 3x systems in a single 1U chassis, running off a single platinum power supply (very efficient), connected to a dedicated UPS. This pic is actually old and I've made a few changes to it recently, but the form and function remain the same. This 1U box covers:

- router
- domain controller
- critical ESXi host

It's connected to an ICX 6610 (which is on the same dedicated UPS). The 6610 consumes ~90 W; this one box consumes ~55-85 W depending on usage.

 
Last edited:

Vesalius

Active Member
Nov 25, 2019
252
190
43
Kubernetes is 3 nodes but can't do live migrations or anything cool.
Kubernetes will have Harvester from the Rancher/SUSE people before long. It's currently early, at version 0.2, but I am very interested given how battle-tested Kubernetes is and how well-integrated HA storage is via Longhorn. If the build is quite a ways off, it might be a viable option by then. You should be able to run and manage the world of options from Docker Hub, plus VMs natively in HA, from the same small cluster.

  • VM lifecycle management including SSH key injection, cloud-init, and graphic and serial port consoles
  • VM live migration support
  • VM backup and restore support
  • Distributed block storage
  • Multiple NICs in the VM connecting to the management network or VLANs
  • Virtual Machine and cloud-init templates
  • Built-in Rancher integration and the Harvester node driver
  • PXE/iPXE boot support

 
Last edited:

Levi

Member
Mar 2, 2015
76
5
8
34
Critical - Define it however you want, but again, it is the absolute minimum needed for "whatever".
Non-Critical - Anything not above, and does not affect "operations".
I think for me critical is getting access to some of the smart home / IP cameras. That means all the networking between those devices and WAN is critical. Now I have always had pretty good luck with consumer-grade gear so it's not that I don't trust a single name brand switch or router but it's more like... if I land in a country that I intend to be in for 6 months and then I get an alert that my house is offline... that's going to cause me some anxiety. So if I can use some cheaper consumer gear but in a HA configuration, I will feel pretty confident when traveling. If I would be home most of the time, I would probably have a setup similar to yours.

Like @T_Minus said, solar is where it's at. I'll go one step further and add a Tesla Powerwall to it. A fully charged Tesla Powerwall, could run your "critical" infrastructure for weeks, if not more, and then a second power backup like a generator could kick in.
The new Powerwall is like 13.5 kWh of storage. I haven't finalized how much energy the house or garage will need, but I was planning on building a 30 kWh battery pack from refurbished LiPos. It will probably be a challenge to build several batteries from random cells into packs that produce the proper voltage when put in parallel. To skip the hassle and the danger, I would probably get a Tesla Powerwall, but pretty sure they will only give you one if they do the panels as well? I'm also pretty sure they have not expanded to the east coast yet. Also, they seem to be getting bad reviews on those new solar shingles because they raised the price. Before my IT career I was an electrician. It's likely I'll save money by installing the solar myself.
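For scale on the DIY-pack idea, series/parallel sizing from generic cells looks like this (the cell figures are assumptions for a typical 18650-class cell, not a recommendation):

```python
import math

# Assumed generic cell: 3.7 V nominal, 2.5 Ah (~9.25 Wh)
CELL_V = 3.7
CELL_AH = 2.5

def pack_layout(target_v: float, target_kwh: float):
    """Return (cells in series, parallel strings, total cells)."""
    series = round(target_v / CELL_V)              # e.g. 13S for a ~48 V bank
    string_wh = series * CELL_V * CELL_AH          # energy of one series string
    parallel = math.ceil(target_kwh * 1000 / string_wh)
    return series, parallel, series * parallel

print(pack_layout(48, 30))   # (13, 250, 3250) for a 48 V, 30 kWh bank
```

Over three thousand matched-and-tested salvage cells, plus a BMS per string, is a good illustration of why the Powerwall route is tempting despite the constraints.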

Edit: This is what my "critical" server looks like. 3x systems in a single 1U chassis, running off a single platinum power supply (very efficient), connected to a dedicated UPS. This pic is actually old and I've made a few changes to it recently, but the form and function remain the same. This 1U box covers:
This is the kind of stuff I come here for. Three machines in a single box with one PSU is my kind of build. This is where it gets interesting. So would it be better to do two machines in a single mid-tower ATX case? You know, hang a motherboard from each side of the case and put a big copper gaming cooler on the CPUs? Because the fans in those 1U chassis are really bad on the watts side. The fans in your picture are 9 W apiece. These 140 mm fans are 1.5 W each. My first thought was that I would have to skip all the enterprise stuff to meet my power goals. No dual-CPU, dual-PSU chassis. So I was thinking of running consumer chassis with consumer cooling but putting two systems in each case. I don't even think a mid-ATX would cut it; I'm looking at around 16 drives per case. A full tower will probably be needed, unless I tape the SSDs together with electrical tape. I did that for a RAID 10 SSD file server I built and it seemed to work okay.
 

Levi

Member
Mar 2, 2015
76
5
8
34
Kubernetes will have Harvester from the Rancher/SUSE people before long. It's currently early, at version 0.2, but I am very interested given how battle-tested Kubernetes is and how well-integrated HA storage is via Longhorn. If the build is quite a ways off, it might be a viable option by then. You should be able to run and manage the world of options from Docker Hub, plus VMs natively in HA, from the same small cluster.

  • VM lifecycle management including SSH key injection, cloud-init, and graphic and serial port consoles
  • VM live migration support
  • VM backup and restore support
  • Distributed block storage
  • Multiple NICs in the VM connecting to the management network or VLANs
  • Virtual Machine and cloud-init templates
  • Built-in Rancher integration and the Harvester node driver
  • PXE/iPXE boot support

Wow... I'm very familiar with Kubernetes and this would be a game-changer. I'm not that familiar with Rancher products; I've used k3s on a Raspberry Pi, but haven't used Rancher, Longhorn, or k3OS. I'm worried that they lock you out of the underlying OS. Kubernetes runs pretty well on almost any OS once it's installed correctly, and I might want control over additional stuff at the OS layer like metrics and logging. Remember CoreOS? Let me create a thread on hypervisors and get this cross-posted. I want to try out Harvester soon!
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
pretty sure they will only give you one if they do the panels as well? I'm also pretty sure they have not expanded to the east coast yet. Also, they seem to be getting bad reviews on those new solar shingles because they raised the price. Before my IT career I was an electrician. It's likely I'll save money by installing the solar myself.
You're mixing about five different things... :) Yes, Tesla won't sell you a Powerwall without getting solar from them, but that doesn't mean you have to get their solar roof (i.e. the shingles). Their panels are perfectly fine and very competitive in price, warranties, etc. Since you'll need to procure solar anyway, that's why I said "add a Powerwall".

P.S. I wouldn't do a solar installation myself, even though I could, for warranty (solar company), insurance (homeowner's insurance), and logistical (hooking up with the power company, net metering, etc.) reasons. But that's me. It's your call.

So would it be better to do two machines in a single mid-tower ATX case? You know, hang a motherboard from each side of the case and put a big copper gaming cooler on the CPUs?
I thought you wanted a rack?? :)

Because the fans in those 1U chassis are really bad on the watts side. The fans in your picture are 9 W apiece.
uhh... you'd be surprised. The fans in my pic are PWM fans (from Dell machines) and they rarely ramp up to full power. When I say rarely, I mean I've yet to hear them ramp up from absolute idle (~20% duty cycle) at all in over a year. At idle speeds, all three of those fans combined add about 2 watts to the power consumption. :)

My first thought was that I would have to skip all the enterprise stuff to meet my power goals. No dual-CPU, dual-PSU chassis. So I was thinking of running consumer chassis with consumer cooling but putting two systems in each case. I don't even think a mid-ATX would cut it; I'm looking at around 16 drives per case. A full tower will probably be needed, unless I tape the SSDs together with electrical tape. I did that for a RAID 10 SSD file server I built and it seemed to work okay.
While generally true, enterprise stuff has come a long way. A Dell R230 idles at <20 W out of the box, and you can slap it into a rack right away. Dependability is as important as anything else, and while DIY is good, "bad DIY" is just that: bad. No taping drives, please...
 
  • Like
Reactions: Amrhn

Vesalius

Active Member
Nov 25, 2019
252
190
43
In regards to home battery backup, Ford has my attention with bi-directional power/charging to the home, especially for rural folks who can use a truck anyway. First, I hope they follow through and this pans out. Next, I hope this spreads to all electric vehicles; it seems like a no-brainer as an option. It would be a great and more easily replaceable battery supplement to a solar home, with a smaller Powerwall/battery bank and maybe a propane/gas/diesel generator as the 2nd/3rd level backup.

If you opt for the bi-directional 80-amp Ford Charge Station Pro, plus a home management system and an inverter needed to connect to your home, the F-150 Lightning will be able to output 9.6 kW of power through an Intelligent Backup Power function—enough to power the lights and appliances for days.
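How "days" pencils out, using assumed round numbers (neither figure is from the quote: ~131 kWh for the extended-range pack and ~30 kWh/day for an average US household):

```python
# Rough runtime for powering a house from the truck's battery.
PACK_KWH = 131.0     # assumed extended-range battery size
DAILY_KWH = 30.0     # assumed average household consumption

def backup_days(pack_kwh: float, daily_kwh: float) -> float:
    return pack_kwh / daily_kwh

print(round(backup_days(PACK_KWH, DAILY_KWH), 1))   # 4.4 days at normal usage
```

With any rationing (fridge, network, and lights only), the same pack stretches considerably further, which is the appeal for the off-grid case in this thread.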
 
  • Like
Reactions: martini dry