Where to start? $1k budget for CPU/RAM for an ESXi 8 VMware host


Alfa147x

Active Member
Feb 7, 2014
Hi all,

I have a VMware host I've been really happy with, until I found myself barred from upgrading to ESXi 7. I'm interested in building a box for ESXi 8, as this will probably be my last ESXi host before I change hypervisors.

My current system:
  • E3-1230 v3
  • X10SLL-F
  • 32 GB memory
  • Dell HBA330 (connects to my SA120 for storage)
  • 2x PCIe-to-NVMe adapters for VM storage
Uses:
  • NAS - Xpenology Host - DSM 7
  • AlmaLinux hosts running ELK stack
  • Portainer
  • random VMs for testing/lab work - Gitea, etc.
  • NVR (not currently but planned)


Requests/Requirements:
  • I'd like to spend less on power
  • I want on-board 10GbE
  • Out-of-band management is a must
  • Half-depth case
PCIe slots:
  • I want to expand to 4x NVMe drives
  • Dell HBA330
  • GPU1 - for NVMe?
  • GPU2 - placeholder
  • 10GbE NIC if not on-board




Questions:
  • Is my budget of about $1000 for CPU/RAM reasonable?
  • What CPU/mobo families should I look at? All the options are very overwhelming.
  • Should I consider an "enterprise" GPU that would let me share GPU capabilities across VMs?
  • What VMware features should I look to take advantage of during this upgrade?
    • I don't run vCenter now because it's heavy. Should I reconsider?
 

alaricljs

Active Member
Jun 16, 2023
On-board 10Gb and less power are not nicely compatible. I'd recommend an off-board SFP+ 10Gb card with fiber; it saves a decent number of watts over copper.
 

Netwerkz101

Active Member
Dec 27, 2015
(Alternate) options:

Swap the CPU to an E3-1200 v4 series chip and you should be able to get to ESXi 7.0.3.
Or swap to your new hypervisor now, if it supports your current hardware.

Anything new (motherboard + CPU + RAM + chassis) will be over $1,000 if bought as parts.
 

Alfa147x

Active Member
Feb 7, 2014
Dual Xeon v4.

I'd also think about firing VMware right now. Down that path lies doom. You know that Broadcom switched them to a subscription model, right?
Yes, very aware of the Broadcom mess.

I'm very happy with ESXi 7 and want to play around with ESXi 8 and modernize my hardware.

I'll test other hypervisors during my search too, as I don't plan on staying on ESXi 8 for long.
 

nk215

Active Member
Oct 6, 2015
You can get a dual E5 v4 setup with that budget all-inclusive. Here's an example:

https://www.reddit.com/r/homelabsales/comments/18mbzrq

  • Dual Intel Xeon E5-2650 v4 CPUs
  • 128GB ECC Registered DDR4 2400MHz RAM (2400T)
  • HBA330 controller
  • 2x Dell 3.5" hard drive caddies
  • All 3 PCIe expansion risers included (risers 1, 2, and 3)
  • X520/I350 networking daughterboard (2x 10Gb SFP+, 2x 1Gb RJ45)
  • Dual high-output 1100W 80+ Platinum power supplies - plenty of juice for 12 drives
  • iDRAC Enterprise

$550 ALL IN, SHIPPED
 

Rand__

Well-Known Member
Mar 6, 2014
My current system:
  • E3-1230 v3

Requests/Requirements
  • I'd like to spend less on power
PCIe slots:
  • I want to expand to 4x NVMEs
You want less power usage but more NVMe drives? For that to work, the power savings from the new CPU would need to be significant; some NVMe drives (non-M.2) can draw up to 25W each.

Given that you're running an E3 now, I don't think there is much to gain there.
Yes, an in-line update to a newer generation (E-2100/2200/2300 or the brand-new ones) will provide more features and potentially lower power at equal performance (or more performance at the same power).

Also bear in mind that the E3 lines are usually low on PCIe lanes, so running 4 NVMe drives (16 lanes, assuming x4 each) + 10G (x8) + an HBA (x8) + potentially a GPU might be difficult. See the lane arithmetic below.
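To make that lane counting concrete, here's a quick back-of-the-envelope tally. It's a sketch only: the per-device widths are the typical assumptions named above, not measurements from any specific board.

```python
# Back-of-the-envelope PCIe lane budget for the wishlist above.
# Per-device widths are typical electrical widths (assumptions);
# check what each card actually negotiates.
devices = {
    "4x NVMe @ x4": 4 * 4,   # 16 lanes
    "10GbE NIC":    8,
    "HBA330":       8,
    "GPU":          16,
}

total = sum(devices.values())
print(f"lanes wanted: {total}")   # 48

# CPU-provided lanes on the platforms in this thread (nominal figures):
# E3/E-2x00: 16, E5 v4: 40 (80 dual), Xeon-W (Skylake): 48, EPYC: 128.
```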

The X11SRA will need a Xeon-W CPU to fully utilize those slots as specified, and that CPU won't run at 1230 v3 power levels.
Switching to dual E5s, as suggested here, will double or triple your power draw.

Long story short - either you need to scale down requirements or accept a higher power draw.
 

Alfa147x

Active Member
Feb 7, 2014
Thanks for working with me. I'll accept the higher power draw.

My thought process is that I want more NVMe slots for future expansion without having to replace the drives I have today - unless I can find an NVMe DAS, something like my Lenovo SA120, for ~$300.

I also understand that adding one or two GPUs will add to power consumption.

With NVMe drives, GPUs, HBAs, and NICs, I'm hoping to budget room for future expansion - understanding that each of these consumes more wattage.

My original comment on power consumption meant that I'd like the mobo/CPU power draw to be less than my current Xeon E3-1230 v3. It's become clear that this is a tall ask within my budget.

I'm not open to dual-CPU options. I don't have enough sustained compute to justify that level of CPU.

This is an annual exploratory thread I open here, and I'm leaning towards spending my money elsewhere on other upgrades and sitting this round out.

Honestly, I've had a wonderful time with VMware ESXi in a homelab capacity (my professional experience is a mixed bag) - I would like to experience ESXi 8 as VMware's peak before whatever mess emerges from Broadcom.


---



Is AMD a better place to look for ATX motherboards, PCIe lanes, and power consumption in low-end chips?
 

CyklonDX

Well-Known Member
Nov 8, 2022
Is AMD better for PCIe lanes and power usage?
Depends on the tier you are looking at.
A Ryzen system? Sure, you could do that - most likely under $1k. It will have better efficiency, as well as more PCIe lanes.


But

iDRAC/BMC-style management... that's not going to be easy on a Ryzen system - it's a desktop platform, after all - unless you spend an extra buck and look up PCIe management boards; there used to be some Pi boards acting as one.

Then come your requirements, where you are going to be hard-pressed: 4 NVMe drives? 2 GPUs? A SAS controller? And a 10-gig NIC? I don't think that's possible with the available PCIe lanes. To accomplish that you would need dual CPUs (cheapest), EPYC/Threadripper, or the X99 platform (potentially your winner on price) - and in terms of wattage, any of those will in reality be more power-hungry than your current system.


An X99 combo (mobo + CPU) can be bought for somewhere around $150-200 (there are some Chinese boards).
You can use any Xeon v3/v4 in there (including the low-power ones - 50W TDP minimum on load).
Up to 40 PCIe lanes (to get the full lane count you would need a Xeon).
 

Rand__

Well-Known Member
Mar 6, 2014
So I was going to write about compatibility (looking at the updated support matrix, https://kb.vmware.com/s/article/82794), then ended up searching for support details on older versions, and there is nothing (yet) beyond "will honor existing support contracts".

That does not say anything about providing a free vSphere option in the future, nor about making patches available to the public (i.e. without a paywall).

Now, end of general support for ESXi 7 is 2025-04-02; for 8 it's 2027-10-11 (according to the Product Lifecycle Matrix).

So unless there are features in 8 that you're desperate to try, at this point in time (without much information on how "free/without a support contract" options can be patched in the future) I don't think building a new 8.x box is something I'd recommend.
You have 16 months before your 7.x runs out of patching; in that time it will become clear whether access to those patches will still be (legally) available and how this whole story develops. That leaves you plenty of time to look at alternatives.

If you're set on upgrading (need to book some costs this fiscal year, bitten by the upgrade bug), then you need to decide whether you need/want "supported" hardware or whether "deprecated" (works, with a warning) is OK for you. If so, the next step is to determine how many PCIe lanes you really need (16 for NVMe, 8 for 10G+, 8 for the HBA, 16 per GPU, and some for USB/SATA etc.), as that decides the platform (Ryzen is not officially supported but will run).
The main issue is going to be the GPUs - what's the use case for those? Desktop use? Plex? ML?

Also memory concerns (ECC or not, and the max amount - although coming from 32GB, you probably don't need too much).

What might also help would be to measure your current power draw.

I run a couple of Fujitsu TX1320 M3s (E3 v6, 64GB, 2x NVMe [vSAN], 25G cards, single SATA SSD) which average ~70W each. I downgraded from E5 v4s after seeing what my intended upgrade (Cascade Lake Xeon-Ws) would draw. I also don't use that much CPU (4 E3 v6s is plenty overall), but I am significantly memory-limited (with vSAN taking much more in ESXi 7 than it used to in 6).
The base power draw of these Fujitsus is about 20W (board + CPU); an E5-1650 v4 draws about 50W (at boot) and a Skylake CPU was drawing 70W (at boot). Maybe the boot wattage drops after power savings kick in, but I didn't have an OS on these boxes when I measured yesterday. For a sense of what those wattages cost per year, see the sketch below.
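As a rough illustration of what those draws mean on the power bill, a tiny bit of arithmetic; the electricity rate is an assumption, so substitute your own.

```python
# Yearly running cost of a constant load, at an assumed electricity rate.
RATE_USD_PER_KWH = 0.15  # assumption - substitute your local rate

def annual_cost(watts: float, rate: float = RATE_USD_PER_KWH) -> float:
    """Cost in USD of running a constant load for one year."""
    return watts / 1000 * 24 * 365 * rate

for label, watts in [("~70W (TX1320 M3 average)", 70),
                     ("~150W (dual E5 / EPYC idle)", 150)]:
    print(f"{label}: ${annual_cost(watts):.0f}/year")
# ~70W is about $92/year; ~150W is about $197/year at $0.15/kWh.
```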

Sorry for the possibly derailing content, just wanted to give some perspective.
 

Railgun

Active Member
Jul 28, 2018
on-board 10Gb and less power are not nicely compatible. Would recommend off-board SFP 10Gb and fiber for a decent number of watts.
Unless you find a board with SFP ports. They exist. I’m assuming you’re assuming onboard = copper, which isn’t always the case.
 

zachj

Active Member
Apr 17, 2019

Supports 16 cores and 128GB of RAM.

It has IPMI and 10G networking, as per your request.

It has two M.2 slots on board and 12 SATA ports - that should cover any storage requirement you have without needing an HBA.

The only thing you'd need PCIe slots for is a GPU, and using a Ryzen 5700G/5600G gets you one without needing a slot at all.

If you want official ECC support, you can get the PRO 5650G/5750G instead.
 

hmw

Active Member
Apr 29, 2019
There are still places selling ESXi 8 Essentials - that gets you ESXi 8 with a perpetual license. The S&S is for 3 years, and VMware will honor that.

Having said that - cheap H12SSL or S8030 motherboards abound, with equally cheap Zen 2 EPYC CPUs. So, something like:
  • Tyan S8030GM2NE-2T
  • EPYC 7302P
  • Rosewill RSV-L4000U
  • Seasonic 750W Platinum

This gives you 16 cores of EPYC with 2x 10Gbase-T.

To this you can add an RTX 4070 or RTX 4080. The S8030 also gives you 2x SlimSAS 8i, so you can add 4x NVMe drives, and the board has 12x SATA connections along with 2x Gen4 M.2 slots.

My specs are quite similar, but keep in mind this all comes at a cost of ~150W idle.
 

hmw

Active Member
Apr 29, 2019
Newegg is doing the Rosewill for $99 - https://www.neweggbusiness.com/product/product.aspx?item=9b-11-147-327

eBay has the 7302P combo for $599 - TYAN S8030 GM2NE Motherboard + AMD EPYC 7302P 16-Cores CPU Processor Combos | eBay
or even cheaper - https://www.ebay.com/itm/175428736161

Zen 2 is cheap and will get cheaper. The uplift from Zen 2 to Zen 3 is like 7-14%, so you're not missing much.

Some forum members have reported success buying from China and a few vendors like tugm4470, but as always, YMMV. I'd source locally and ensure the CPU is unlocked.

128GB of memory is ~$272 - https://memory.net/product/m393a4k4...r4-3200-rdimm-pc4-25600r-dual-rank-x4-module/
Don't buy 8 DIMMs unless you absolutely need the bandwidth. That way you can upgrade the memory later instead of having to replace all 8 DIMM modules.

For your CPU cooler you can use a Dynatron or Supermicro 4U model - https://www.amazon.com/Supermicro-SNK-P0064AP4-EPYC-Socket-Brown/dp/B078ZJSM65

STH has loads of info on EPYC motherboards. Don't buy an H12SSL unless you can get a revision (1.02 or so) that doesn't have the problematic BMC regulators.


As for vGPU - if you want a GPU that you can share across VMs, NVIDIA has some stiff licensing fees. If only one or two VMs at a time need GPU access, put in a 4070 + an Arc A770 and dedicate those to the corresponding VMs. There are resources on Discord that show you how to use NVIDIA's shared GPU, but it doesn't work with the latest generation - and again, it's last-gen GPUs, time-limited trial licenses, or crazy per-VM licensing fees.

Choose your PSU wisely. You can save 10-20 watts by using a PSU that runs at >30% load at system idle - which means getting a 700W PSU, not a 1200W redundant monster. But a smaller PSU means you must calculate the max load precisely, or you risk overloading the power supply. A rough sizing sketch follows.
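To make that sizing rule concrete, here's a rough check; all the component wattages are illustrative assumptions for a build like the one above, not measured values.

```python
# Rough PSU sizing check: a PSU is most efficient in the middle of its
# range, so keep idle from falling too far below ~30% of capacity while
# leaving headroom at max load. Wattages below are assumptions.
components_max_w = {
    "EPYC 7302P (155W TDP)": 155,
    "RTX 4070 (board power)": 200,
    "motherboard + fans": 60,
    "4x NVMe": 40,
    "12x HDD, spun up": 96,
}

max_load = sum(components_max_w.values())   # ~551W
idle_w = 150                                # per the post above

for psu_w in (700, 1200):
    print(f"{psu_w}W PSU: idle {idle_w / psu_w:.0%}, max {max_load / psu_w:.0%}")
# 700W:  idle ~21%, max ~79% -> close to the efficient band, with headroom
# 1200W: idle ~12%, max ~46% -> idle sits far below the efficient band
```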
 

Railgun

Active Member
Jul 28, 2018
If it's for home use, run ESXi 8 as a trial - you can renew the trial license indefinitely.
 

zachj

Active Member
Apr 17, 2019
Unless Broadcom cancelled the program, you can get a free perpetual ESXi hypervisor license.

It's just not compatible with vCenter, and you can't connect to it via the API - really not a problem for a home server.
 

Alfa147x

Active Member
Feb 7, 2014
Yup, for home use. Sorry, I assumed all talk here was for the home, but I should've specified.
 

hmw

Active Member
Apr 29, 2019
Normal "home" use vs homelab use - you might want > 8 vCPUs per VM, access to backup APIs or then the ability to use SR-IOV or use vMotion. These features need specific license tiers that you don't get with the free ESXi license

VMUG advantage is still operating and offers a 365-day eval for homelab/personal use for all the enterprise VMware products
 

Railgun

Active Member
Jul 28, 2018
As mentioned, the 60-day trial gives you everything, and it's a three-line script to reset the trial clock, so for home(lab) use it's an easy way to get what you may need. A commonly shared sketch of that reset is below.
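For reference, here's a sketch of the reset trick as it usually circulates, wrapped in Python since ESXi ships an interpreter. The license-file paths are assumptions taken from homelab guides, not verified here against ESXi 8.

```python
# Sketch of the commonly circulated eval-reset trick; run from an ESXi
# shell as root. File paths are assumptions from homelab guides.
import shutil
import subprocess

# Restore the pristine evaluation license over the expired one
# (/etc/vmware/.#license.cfg is the stock copy, per those guides).
shutil.copy("/etc/vmware/.#license.cfg", "/etc/vmware/license.cfg")

# Restart the management agent so the restored license takes effect.
subprocess.run(["/etc/init.d/vpxa", "restart"], check=True)
```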