Hardware suggestions to reduce power usage


g4m3r7ag

New Member
May 3, 2019
Hello,

I’ve been running my home lab as it stands for a couple of years now. The hardware is starting to get dated and I would really like to reduce my power usage and also increase some redundancy.

Currently I have an R210 II with a Xeon e3-1220, a single cheap SSD and 8GB of RAM running pfSense as my main router/firewall.

An R710 with dual Xeon L5640’s and 96GB of RAM with a raid 5 array of spinning disks. This is running my sole ESXi node with 10-15 VMs. ESXi is installed on a flash drive on the internal USB.

A second R710 with dual Xeon L5640’s and 108 GB of RAM and a mirrored cheap SSD array running FreeNAS. It is connected to a 45 bay SuperMicro JBOD enclosure with ~12 spinning disks.

Those are the heavy power/noise/heat generators. Add a couple of smaller IoT hubs and a switch, and the rack draws a total of 6-8 amps, or roughly 800-1,000 watts. I've already eliminated the need for the SuperMicro JBOD by using a Gsuite business account for the majority of my storage needs.
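For scale, here's the rough cost math on that draw (a back-of-the-envelope sketch; the $0.13/kWh rate and the ~300 W rebuild target are assumptions for illustration only):

```python
# Rough annual electricity cost at a given steady draw.
# The $0.13/kWh rate is an assumed average; substitute your local rate.
RATE_PER_KWH = 0.13
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float) -> float:
    return watts / 1000 * HOURS_PER_YEAR * RATE_PER_KWH

for label, watts in [("current rack, low", 800), ("current rack, high", 1000), ("hypothetical ~300 W rebuild", 300)]:
    print(f"{label}: {watts} W ≈ ${annual_cost(watts):,.0f}/year")
```

So even the low end of the current draw is on the order of $900/year at that assumed rate, which is what makes a lower-power rebuild attractive.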

I'm looking to set up a 3-node vSAN cluster. I'm not sure how feasible it is to run two virtualized pfSense instances in CARP, or whether I should do one VM on the cluster and a separate physical pfSense node. I was also thinking I could pass through an HBA on one of the nodes to build a 5-6 disk RAIDZ2 array for the data I want a local copy of, with a backup on the Gsuite drive.
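For sizing that array, a quick sketch of the RAIDZ2 usable-capacity math (the 8 TB disk size is just an assumption for the example; real pools lose a bit more to metadata and free-space headroom):

```python
# RAIDZ2 keeps two disks' worth of parity per vdev, so usable space is (disks - 2) * size.
# The 8 TB figure below is illustrative only.
def raidz2_usable_tb(disks: int, tb_per_disk: float) -> float:
    if disks < 4:
        raise ValueError("RAIDZ2 needs at least 4 disks")
    return (disks - 2) * tb_per_disk  # before filesystem overhead and free-space headroom

for disks in (5, 6):
    print(f"{disks} x 8 TB in RAIDZ2 ≈ {raidz2_usable_tb(disks, 8.0):.0f} TB usable (pre-overhead)")
```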

One of the things I'm struggling with is choosing hardware that actually reduces my power usage and heat/noise output. I initially was thinking of using Xeon E-2136s, but anything from that series seems hard to come by. I would like to have out-of-band console (IPMI) access, which I believe restricts me to a Supermicro board unless I go with another enterprise chassis, but that might make it harder to reduce the power/heat/noise. I'm assuming I would need a 2-4U enclosure to keep noise to a minimum. I'm budgeting roughly $1,000 per node; cheaper would obviously be better, but DDR4 gets expensive. The ability to add 10Gb SFP+ NICs is required. What CPU generation and chassis would be recommended?

My other question: is it possible to set up one node with an SSD datastore and migrate my existing VMs from the R710 to it, then add the other nodes a couple of months later with their own SSDs and convert everything to vSAN without losing the datastore? Or would I need to migrate all my VMs to an external datastore, create the vSAN datastore, and migrate back? Is this a sound idea, or do I need to set up all three nodes at once?

Thank you for any help that can be provided.
 

acquacow

Well-Known Member
Feb 15, 2017
If you want to reduce power usage, dump the rackmount OEM gear and get some single-CPU Supermicro gear. My whole homelab is a 3-machine cluster built on X9SRL boards running E5-2648L CPUs with 128GB of DRAM each.

I have one central 8 disk, 4 ssd FreeNAS box on a Xeon-D 8-core with 64GB. The whole lab runs under 500W total.

Whole thing sits on a few shelves behind the door in my laundry room, dead silent, no active cooling.

Inside each hypervisor:

[photos]

All networked with 10GigE.

-- Dave
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
How are you liking those 2 cases on the right @acquacow? I was debating trying one out; I've used the DS380 before but not the other style.
 

acquacow

Well-Known Member
Feb 15, 2017
There's not much inside them; I removed the 5.25"/HDD cage since I don't need it. These boxes are all flash, and I just double-side tape the SSDs together in a stack or use PCIe flash.

I like that I can dedicate left to right airflow with the cases and don't have to worry about my wife/etc putting something in front of them and blocking airflow.
 

Spartacus

Well-Known Member
May 27, 2019
Austin, TX
1) I second the single-CPU Supermicro board option. I'm almost done building my second unRAID box (for backup) using X10-generation boards; both boards I chose have at least four x8 slots for SAS cards and a 10Gb NIC. I was previously using two R420s with L-version CPUs, but those suckers were loud, power hungry, and generated a bunch of heat. I was looking more for large, slow storage than for VM usage (though I have a bit of a mix). I'm now under 30 dB for both of my boxes, and there's no longer that 40mm whine from the Dell jet-engine fans. They do still generate a bit of heat, but it's much more manageable, and it's quiet enough that I'll likely move it into my upstairs living area.

2) On the vSAN note: the disks have to be dedicated to vSAN, so all data will be wiped when the datastore is created and you'll need to migrate it off first. You could, however, create your initial host and its VMs, later build a 2-node cluster with a witness, move the VMs to it, and then add the original node in.
 

g4m3r7ag

New Member
May 3, 2019
@acquacow How is the power usage on those X9s? Even being a low-power Xeon, it's still only two years newer than my L5640s, so I don't suspect the idle efficiency would be much better than what I currently have if I had three of those running. The dual L5640s should have a combined PassMark in the realm of 12,000, and I'm having a hard time finding a Xeon in the 12k PassMark range from the last couple of years that isn't $500+. The Xeon E-2136 fits the bill at only a little over $300, but they're impossible to find.

I've been debating forgetting about IPMI access and building three Ryzen 5 1600 nodes. I could probably build three of those for close to what one Xeon/Supermicro node would cost, and they would be extremely power efficient.
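For comparison, here's the rough PassMark-per-TDP-watt math behind that thinking (a sketch only; the scores and TDPs are approximate public figures, and TDP is at best a loose proxy for real idle/load draw):

```python
# Rough throughput-per-TDP-watt comparison of the CPU options discussed above.
# Scores and TDPs are approximate; treat the ratios as ballpark figures only.
candidates = {
    "2x Xeon L5640 (current)": {"passmark": 12000, "tdp_w": 2 * 60},
    "Xeon E-2136":             {"passmark": 12000, "tdp_w": 80},
    "Ryzen 5 1600":            {"passmark": 12500, "tdp_w": 65},
}

for name, c in candidates.items():
    ratio = c["passmark"] / c["tdp_w"]
    print(f"{name}: ~{c['passmark']} PassMark / {c['tdp_w']} W TDP = {ratio:.0f} points per TDP watt")
```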
 

llowrey

Active Member
Feb 26, 2018
There's no reason to forgo IPMI (or ECC) with Ryzen. Check out the ASRock Rack X470D4U (or X470D4U2-2T if you want onboard 10GbE). I have one running a Ryzen 5 3600 with DDR4-2400 ECC UDIMMs that are overclocked to DDR4-3200. I could probably get the RAM to 3600 with a little fiddling but haven't set aside the time to do it.

My idle power, at the wall, is a bit steep at 78W but that includes ~40W worth of HDDs, NICs, fans, and other sundry items.
 

Evan

Well-Known Member
Jan 6, 2016
With X9-series gear you could go for something like an E5-2690 v2, which has a PassMark around 16,400, or something newer.

A Xeon D-1540 (PassMark 10,740) or D-1541 (11,060) would be a super-low-power option, idling around 30-35 watts. There were some cheap ones around a while back, and you should still be able to find them cheap.
 

g4m3r7ag

New Member
May 3, 2019
@llowrey I saw numerous reviews and posts saying that after a couple of weeks the IPMI dies and requires a full power cycle to start functioning again. Have you experienced that?

@Evan I initially thought about a Xeon-D, but I don't recall any that had more than one expansion slot, or if they did it was just an x1 plus an x8/x16. However, some do have multiple NICs, SFP+ at that, which would eliminate the need for most of the expansion cards. I would likely just need a PCIe-to-NVMe card if the board has all the required NICs. I may have to look through those models again.
 

Aestr

Well-Known Member
Oct 22, 2014
Seattle
If power and cost are concerns, I would reconsider vSAN. Based on your current setup, unless you are planning to add a lot of VMs in the near future, three nodes are likely overkill, which means up-front cost and ongoing idle power that are mostly wasted. You'll also see a lot of posts about disappointing performance from 3-node home-lab vSAN setups. Of course, if you're just really interested in trying out vSAN for the experience and don't want to virtualize a test setup, there's nothing wrong with that; just understand that you'll be paying a decent amount extra for that experience.

If you want vSphere redundancy, look at other options like HA or, if uptime is more flexible, at two nodes with a good backup schedule.

On the pfSense front, you can run two VMs in CARP just fine. There are a few settings you need to configure in vSphere, but there are guides online for this.
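For reference, the usual vSphere-side change for CARP is relaxing the security policy on the port groups the pfSense VMs sit on (promiscuous mode, MAC address changes, and forged transmits set to accept) so the shared virtual MAC/VIP works. Below is a minimal pyVmomi sketch of that change on a standard vSwitch port group, with placeholder host name, credentials, and port group name; one of the online guides for your vSphere version is still the better reference:

```python
# Sketch: allow promiscuous mode / MAC changes / forged transmits on a standard
# vSwitch port group so a CARP VIP can float between two pfSense VMs.
# Host name, credentials, and port group name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate verification
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password", sslContext=ctx)

content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(dnsName="esxi01.lab.local", vmSearch=False)
net_sys = host.configManager.networkSystem

pg_name = "pfsense-wan"  # placeholder port group used by the pfSense VMs
for pg in net_sys.networkInfo.portgroup:
    if pg.spec.name == pg_name:
        spec = pg.spec
        spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy(
            allowPromiscuous=True, macChanges=True, forgedTransmits=True)
        net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)

Disconnect(si)
```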
 

g4m3r7ag

New Member
May 3, 2019
@Aestr the problem I'm having with a non-vSAN setup is redundant shared storage. If I have a share from FreeNAS or a Synology and need to update either one, I have to migrate the VMs to different storage or shut them down to do a DSM or FreeNAS upgrade. Is there a way around that problem that I'm not seeing, one that still gives me two-node redundancy so I can upgrade one ESXi host without having to bring any VMs offline?
 

llowrey

Active Member
Feb 26, 2018
@llowrey I saw numerous reviews and posts saying that after a couple of weeks the IPMI dies and requires a full power cycle to start functioning again. Have you experienced that?
I had issues with IPMI failing during reboots; it would go offline for about 10 minutes. I discovered that the issue went away after I unplugged the monitor. The IPMI/KVM interface works so well that I haven't needed a screen attached.
 

Rand__

Well-Known Member
Mar 6, 2014


[URL="https://www.supermicro.com/en/products/motherboard/X10SDV-7TP4F"]X10SDV-7TP4F | Motherboards | Products | Super Micro Computer, Inc.[/URL] ?

[QUOTE="g4m3r7ag, post: 243770, member: 22128"][USER=3011]@Aestr
the problem I’m having with a non vSAN setup is redundant shared storage. If I have a share from FreeNAS or a Synology and need to update either I either have to migrate the VMs to different storage or shut them down to do a DSM or FreeNAS upgrade. Unless there is a way to work around that problem that I’m not seeing and still have two node redundancy so I can upgrade one ESXi host without having to bring any VMs offline?[/QUOTE]

Totally an issue that I also have - at this point I have resolved to build either
-a very fast ZFS shared storage and a relatively low performing vSan cluster (=4/5 hosts [had 2+1/3 cluster node vsans before and did not really like these)
-a HA capable two host fast ZFS shared storage and a two host ESXi cluster (maybe with vSan witness offsite for real 'ZFS' emergencies)

Low power will have to come from individual nodes which means fit to size (not the overkill I have now) so know your workload and requirements and build accordingly...[/user]
 

g4m3r7ag

New Member
May 3, 2019
I'm confused; why would you build ZFS shared storage and a vSAN cluster? What kind of problems did you have with 3-node vSAN? I don't need insane IOPS, just the ability to run upgrades without knocking stuff offline.

Anything I can look into for HA ZFS? The few things I saw on a quick search all wanted a JBOD with multipath SAS drives, which doesn't really work for me. Is there an option that will replicate the pools and act as an active/passive failover setup?
 

Evan

Well-Known Member
Jan 6, 2016
People have different performance expectations, but in general with a small vSAN config the performance is so much worse than the parts suggest that it's just sad.
vSAN is intended to provide fair I/O to a lot of VMs, but it fails miserably for a small number of VMs wanting fast, low-latency access, which is what you'd expect in a small home cluster or single-person development environment.

Somebody here reported that it was possible to benchmark good figures in a small config, but it was 100% NVMe with everything on the approved hardware list. It remains to be seen how it really holds up.
 

g4m3r7ag

New Member
May 3, 2019
Any recommendations for HA storage for a 2-node vSphere cluster that's not a loud, power-sucking, out-of-support enterprise SAN? Something DIY? I would like to be able to do OS upgrades without having to shut down the VMs.
 

acquacow

Well-Known Member
Feb 15, 2017
Any recommendations for HA storage for a 2-node vSphere cluster that's not a loud, power-sucking, out-of-support enterprise SAN? Something DIY? I would like to be able to do OS upgrades without having to shut down the VMs.
In a home lab? Really?

Either set up a 3rd node and vSAN it, or build yourself a Gluster setup exporting NFS mounts to the ESXi boxes.

If you don't want to pay the VMware tax, install oVirt with hyperconverged Gluster.
 

Rand__

Well-Known Member
Mar 6, 2014
I'm confused; why would you build ZFS shared storage and a vSAN cluster? What kind of problems did you have with 3-node vSAN? I don't need insane IOPS, just the ability to run upgrades without knocking stuff offline.

Anything I can look into for HA ZFS? The few things I saw on a quick search all wanted a JBOD with multipath SAS drives, which doesn't really work for me. Is there an option that will replicate the pools and act as an active/passive failover setup?
Active/passive is covered in https://forums.servethehome.com/index.php?threads/zfs-vcluster-in-a-box.22460/ under Option 1.2 (as well as active/active with MPIO).

Re: why a vSAN cluster in addition to an HA ZFS setup - contingency only. If I have two ESXi nodes up already, it's not an issue to put a witness somewhere, since I already have the hardware; maybe just for a while until I get comfortable with the HA setup. At this point it's the planning stage only; I still run a 4-node vSAN, just not happy with it (performance).

Re: what's wrong with a 3-node vSAN
A 3-node vSAN setup will not allow redundancy higher than FTT=1 to be configured in the vSAN storage policy (IIRC). That means basically one mirror copy plus a witness component. So whenever one node is down (intentionally or not), you cannot start or move VMs, since they would be in violation of the vSAN policy, and if two nodes go down for whatever reason the cluster won't work at all.
Of course, none of that is really an issue if you exercise proper caution: no accidental reboots due to power issues, switch problems, or hardware failures; pre-test compatibility of driver or ESXi updates with your hardware; run a proper functionality test after an ESXi update to see whether they broke a driver or the networking again, or whether the dvSwitch is acting up again (going out of sync or not accepting a host any more), etc. But I had lots of issues with a 3-node vSAN and significantly fewer with a 4-node setup, simply because it allows regular (fully redundant) operation even when a node is not working properly (which happens more often than I imagined).

Of course, the less you tinker with it, the less likely you are to have an issue.
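As a rough illustration of the FTT math above (a minimal sketch; the 2×FTT+1 host minimum and FTT+1 data copies are the standard RAID-1 mirroring rule, and the capacity figures are just example numbers):

```python
# vSAN RAID-1 mirroring: tolerating FTT failures needs FTT+1 data copies and 2*FTT+1 hosts.
# Capacity numbers below are illustrative only.
def vsan_raid1(ftt: int, raw_tb_per_host: float, hosts: int):
    min_hosts = 2 * ftt + 1            # FTT=1 -> 3 hosts, FTT=2 -> 5 hosts
    data_copies = ftt + 1              # full replicas of every object
    usable = raw_tb_per_host * hosts / data_copies  # ignores slack space and overheads
    return min_hosts, data_copies, usable

for ftt in (1, 2):
    min_hosts, copies, usable = vsan_raid1(ftt, raw_tb_per_host=4.0, hosts=4)
    print(f"FTT={ftt}: needs >= {min_hosts} hosts, {copies} data copies, ~{usable:.1f} TB usable of 16 TB raw")
```

Which is why a 3-node cluster is stuck at FTT=1, and why losing one node leaves no way to restore full redundancy until it comes back.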
 

g4m3r7ag

New Member
May 3, 2019
Active/passive is covered in https://forums.servethehome.com/index.php?threads/zfs-vcluster-in-a-box.22460/ under Option 1.2 (as well as active/active with MPIO).

Re: why a vSAN cluster in addition to an HA ZFS setup - contingency only. If I have two ESXi nodes up already, it's not an issue to put a witness somewhere, since I already have the hardware; maybe just for a while until I get comfortable with the HA setup. At this point it's the planning stage only; I still run a 4-node vSAN, just not happy with it (performance).

Re: what's wrong with a 3-node vSAN
A 3-node vSAN setup will not allow redundancy higher than FTT=1 to be configured in the vSAN storage policy (IIRC). That means basically one mirror copy plus a witness component. So whenever one node is down (intentionally or not), you cannot start or move VMs, since they would be in violation of the vSAN policy, and if two nodes go down for whatever reason the cluster won't work at all.
Of course, none of that is really an issue if you exercise proper caution: no accidental reboots due to power issues, switch problems, or hardware failures; pre-test compatibility of driver or ESXi updates with your hardware; run a proper functionality test after an ESXi update to see whether they broke a driver or the networking again, or whether the dvSwitch is acting up again (going out of sync or not accepting a host any more), etc. But I had lots of issues with a 3-node vSAN and significantly fewer with a 4-node setup, simply because it allows regular (fully redundant) operation even when a node is not working properly (which happens more often than I imagined).

Of course, the less you tinker with it, the less likely you are to have an issue.
OK, I've been reading through the napp-it vCluster documents. It sounds like I could set up a host with ESXi and some NVMe drives and use the shared disks to create h1 and h2, but when I need to update that host it would have to be brought offline. So I'm trying to figure out how to do it with two hosts, and all I'm seeing that I understand is to use multipath SAS drives with a connection to each ESXi box. Is there a way to set up two ESXi hosts with SATA/NVMe drives to present h1 and h2, so that when upgrading ESXi on those hosts one could be brought offline without making the storage unavailable to the VMs on the compute ESXi hosts?

At that point I would have four hosts though, so I'm not sure if it would just be better to go with a 4-node vSAN?