I have what I would consider to be a nice setup that fits my needs and has redundancy (I can take one server offline with no major interruption, minus VMs that require a passthrough device on that specific host). I'm running PVE + Ceph, so both VMs and data can migrate seamlessly.
The only problem is, the three servers use 550-600W at my baseline load. That's about 18 VMs. Combine that with general networking hardware, and I'm looking at about 700W. That translates to somewhere around $60 a month in electricity. Sure, I'd be paying an order of magnitude more at any cloud provider, but I'd still like to reduce it a bit.
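For reference, the rough math behind that figure (assuming an electricity rate of about $0.12/kWh, which is roughly what $60 for ~700W implies - plug in your own rate):
Code:
# rough monthly electricity cost for ~700W of continuous draw
# assumes ~$0.12/kWh - swap in your own rate
watts = 700
rate_per_kwh = 0.12
kwh_per_month = watts / 1000 * 24 * 30      # ~504 kWh
print(f"{kwh_per_month:.0f} kWh/month -> ${kwh_per_month * rate_per_kwh:.0f}/month")  # ~$60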
As I said, I like that any of the three hosts can go down for maintenance or tinkering and workloads seamlessly migrate. Everything is fast enough, and the NVMe-based Ceph also gives me a nice fast SMB share.
The specs are:
Code:
pve1: ~225w
Supermicro X11SDV-16C-TP8F
Xeon D-2183IT (16C32T)
4x32GB DDR4 RDIMM
Only using onboard networking
SAS3008 + 82885T expander
9 3.5" HDDs, 2 U.2s on a riser card, 4 SAS/SATA SSDs
pve2: ~150w
Supermicro X11SRM-VF
Xeon W-2150B (10C20T Apple OEM with lower TDP)
4x32GB DDR4 RDIMM
ConnectX-4 Lx 25GbE (running at 10Gb)
SAS3808
3x U.2 via NVMe ports, 2 SAS SSDs for boot (will eventually have some spinners in here as well)
pve3: ~185w
Supermicro X11SRM-VF
Xeon W-2145 (8C16T)
4x32GB DDR4 RDIMM
ConnectX-3 40GbE
SAS3008 + Expander built into BPN-SAS3-826EL1-N4
4x U.2 via NVMe ports, 5 or so spinners, 4 SAS/SATA SSDs
Currently not using the front BPN-SAS2-846EL1 backplane, but will eventually move most of the spinners from pve1 to this one (the drives are passed through to a specific VM).
host for router VM + a couple other network-related VMs: unknown power draw
Supermicro X10SDV-8C-TLN4F
Xeon D-1541 (8C16T)
4x16GB DDR4 RDIMM
Onboard networking + X550-T2
One M.2 NVMe drive
Switches and others:
Mikrotik CRS326-24S+-2Q+RM
Mikrotik CRS328-24P-2S+ (this one gets a pass on power draw since it's PoEing things in other rooms)
Cable modem
Mini PC for home automation (negligible power draw)
I've looked into upgrading to a newer platform, but there doesn't seem to be a modern equivalent of the Xeon W-2100/2200 series anymore - modern Xeons are massively overkill for what I need, and the same goes for Threadripper. But if you drop down to a normal desktop platform, the PCIe lane situation is dire. The 28-lane standard of today's desktop CPUs really isn't enough, and it's hard to know whether a board will support proper hotplugging and such until you buy it. I like the 48-lane sweet spot of the LGA-2066 platform, plus the out-of-the-box support for hotplugging and backplanes that the X11SRM-VF gives me. Even the Xeon-Ds seem to have "evolved" into the "Xeon 6 SoC" and gotten a lot more power hungry - 150W TDP for the humble 8C model? Really? I don't need 88 PCIe lanes.
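For what it's worth, once a board is actually in hand you can at least verify whether its ports advertise native PCIe hotplug - a quick sketch under Linux (run with sudo so lspci -vv exposes the capability registers):
Code:
# list PCIe bridges/root ports whose slot capabilities advertise native hotplug
# Linux only; needs root so `lspci -vv` can read the capability registers
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout
current = None
for line in out.splitlines():
    if line and not line[0].isspace():
        current = line  # device header, e.g. "17:00.0 PCI bridge: ..."
    elif "SltCap:" in line and "HotPlug+" in line:
        print(current)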
Is there something I'm missing here? I've considered just using a PCIe switch + EPYC 4004, but then the switch itself has power draw...