ESXi 8 is now free again... really. Bang, and my last ESXi server has been decommissioned. Dropped off all my VMware books and materials at the recycler.


Greg_E

Active Member
Oct 10, 2024
Because of the container workflow, I may want to try Harvester at some point in the future. I need to move to a higher-paying job, one that might actually value what I have to offer; working on that going forward. Having some experience with vSphere, XCP-ng, and hopefully Nutanix and Harvester would give a lot of flexibility for what's needed now, and for where to go in the future to get out from under Broadcom's thumb. A lot of places were able to get a "decent" rate for a three-year lock-in, but they were told that the next contract is pretty much going to be "different": VCF is the only way forward, and Broadcom doesn't want to mess around with piecemeal contracts for three or four little parts of the whole.

If you were using the whole suite before, the new pricing might be pretty close to what you were paying. If you only bought Enterprise Plus, you can pretty much double or triple the cost (if you were lucky). Forming buying groups is the only decent way to combat this, which is what a lot of state college systems have done: since they're all funded by the state, it's easy to claim the state owns all the licenses and to use the larger volume to drive the price down.
 

Greg_E

Active Member
Oct 10, 2024
Just as a follow-up, I haven't made any progress. My VMUG licenses run out next month, and I haven't studied enough to pass either the VCP-VVF or the VCP-VCF exam. I'm thinking I need to move on to something that's friendlier to lab users and values the community by providing free study material for its exams. Jumping through the Broadcom hoops is just exhausting, and so far I've had no luck applying what I've managed to scrape up and learn.

I am taking part in a lot of the VCF webcasts provided to my company; I weaseled in even though I'm not in that department... VCF 9 is going to be "different," and from what I'm guessing, things like distributed switches and port groups will be gone by VCF 10. They are pushing NSX hard: every VCF 9 deployment will be required to install NSX even if you don't use it. That alone tells me they are eventually going to deprecate all the old distributed networking and only allow NSX. Might not be version 10, but I'd guess that by version 11 this will be fact. It's certainly a step forward, but also a harder step for people to grasp.
 

Captain Lukey

Member
Jun 16, 2024
You're 100% correct in my view. The way I see it, they want to reduce the amount of code and the number of product teams to support, so it looks like they're starting to deprecate all the older distributed networking features and only allow NSX. Of course, we could be wrong. Why not take a fresh look at Kubernetes at scale? That always seems to be in demand (and $$$). P.S. Look at how much NSX costs!
 

Greg_E

Active Member
Oct 10, 2024
I'm going to look at Harvester HCI and its K3s integration in the near future. Not sure whether a Hyper-V cluster, Harvester, or Nutanix will be the next thing I work with.
 

Captain Lukey

Member
Jun 16, 2024
I really like Harvester HCI, it's great! The big BUT is that it was slower on my server compared with running native Kubernetes (K8s, not K3s like you are planning) on Proxmox with memory ballooning. It's definitely worth spinning up Prometheus and Grafana to measure performance, and keeping an eye on the hosts locally with htop. When you have 90+ pods running, speed really matters, and DRAM is your friend. Good luck!
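If you want to reproduce that kind of measurement, here's a rough sketch of the setup I mean, using the community kube-prometheus-stack Helm chart (the "monitoring" namespace and release name are just my choices, adjust to taste):

Code:
# Add the community chart repo and pull Prometheus + Grafana in as one stack.
# Assumes a working kubectl context pointed at your cluster.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# Grafana comes up as a service named <release>-grafana; port-forward to reach it.
kubectl --namespace monitoring port-forward svc/monitoring-grafana 3000:80

You get node and pod dashboards out of the box, which makes a Harvester-vs-Proxmox comparison a lot less hand-wavy.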
 

Rand__

Well-Known Member
Mar 6, 2014
Slightly off topic, sorry, but do Harvester and/or Proxmox have a dvSwitch equivalent? I never really looked for it, but that is one of the migration blockers in my mind. I don't have a complicated setup (single active box at the moment), but it's flexible, so VMs can just run anywhere I put them (power on a node anywhere I have a box), and I'd hate to lose that...
 

Greg_E

Active Member
Oct 10, 2024
I am 1/4 to 1/3 of the way through the Harvester user guide (wish they had a PDF or EPUB version), and as with Nutanix, I'm concerned about my lab hardware. Currently running everything on HP T740 thin clients (AMD V1756B, 4c/8t) with 64 GB of RAM; that works fantastically for a small XCP-ng or vSphere cluster. Not so sure about VCF, though, per the concerns above.

I haven't had any issues with distributed switching on XCP-ng: if the interface or bond has access to the VLAN, your VM will have network access. I believe (not 100%) that any network you build once Xen Orchestra is up and the hosts are joined to the pool (cluster) will be a pool-wide distributed network. It's super easy to get up and running, and very lightweight on the hosts, leaving a lot of performance for the VMs. The only current issue is network performance with AMD EPYC processors, and I believe the fix is nearly here. They recently fixed Ryzen, but I haven't tested it because my lab has been down all summer. Proxmox has some advantages, XCP-ng others; I just like how fast you can get an XCP-ng pool set up and work being done. That said, one thing about XCP-ng is that it really needs shared storage, or paid Xen Orchestra, for HCI storage. I think Proxmox might make this easier, but I could be wrong.
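For the dvSwitch question above: the XCP-ng equivalent is just a pool-wide network, defined once and present on every host in the pool. From memory, the CLI version looks something like this (eth0 and VLAN 50 are examples, and Xen Orchestra does the same thing from the UI, so treat this as a sketch):

Code:
# On the pool master: create a pool-wide network and tag it onto a VLAN.
NET=$(xe network-create name-label=vlan50)
# Grab the physical PIF for the trunk NIC (VLAN=-1 means untagged/physical).
PIF=$(xe pif-list device=eth0 VLAN=-1 --minimal | cut -d, -f1)
# pool-vlan-create replicates the VLAN PIF across every host in the pool.
xe pool-vlan-create network-uuid=$NET pif-uuid=$PIF vlan=50

Any VM attached to that network can then power on (or migrate to) any host in the pool and keep its connectivity, which sounds like the behavior you want to preserve.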
 

Captain Lukey

Member
Jun 16, 2024
Wow!! The HP T740 thin client is a surprisingly capable little box: compact, efficient, and well suited to homelab use despite its limited core count. To get the best performance out of it, the key is to minimize overhead and focus on lightweight workloads.
Hypervisors: if you prefer traditional virtualization, both ESXi and XCP-ng run well on the T740 with modest resource usage. However, if you can, containerization is even better; running your workloads directly in LXC or Docker maximizes efficiency and makes the most of the available cores and RAM.

Proxmox: Proxmox VE is still a strong choice here, but it's worth trimming it down. Disable features you don't need (HA, clustering, metrics, etc.) and keep the setup lean; see the sketch below. On the T740's AMD CPU, Proxmox runs stably with the latest updates, and you can comfortably run multiple VMs and containers without issue.
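From memory, trimming a standalone node looks something like this (service names are from my notes, so double-check them against your PVE version before disabling anything):

Code:
# Stop the HA stack if the node is not (and will not be) in a cluster.
systemctl disable --now pve-ha-lrm pve-ha-crm
# Storage replication only matters between cluster nodes; the timer can go too.
systemctl disable --now pvesr.timer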
As an example, a homelabber friend of mine, Chris (a.k.a. Mr. "I have no DRAM left"), runs five T740s, all stock, without upgraded RAM, and has no problem running multiple VMs and LXC containers on each node. Stability has been rock solid.

From memory (I feel like I'm running on swap space), the BIOS defaults are fine. Memory overclocking isn't supported on the T740, so there's nothing to tweak there.
  • IOMMU is enabled by default (set to "Auto"), which is handy if you plan to use PCI passthrough; there's a quick sanity check sketched after this list.
  • Keep power settings stock unless you’re specifically tuning for ultra-low idle draw.
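On the passthrough point, here's a quick way to confirm IOMMU is actually active before you start mapping devices (standard Linux commands, nothing T740-specific):

Code:
# Confirm the kernel brought IOMMU up (look for DMAR / AMD-Vi lines).
dmesg | grep -i -e dmar -e iommu
# List the IOMMU groups; a device you pass through should sit in its own group.
find /sys/kernel/iommu_groups/ -type l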


Good luck with Nutanix, that is a lot of code to run on a small box. :) I don't think the T740 meets their minimum requirements... less is more. :)