Did anyone here ditch vSphere for Proxmox (or something else)?

If money is no object, Proxmox or vSphere?


  • Total voters: 20
  • Poll closed

AveryFreeman

consummate homelabber
Hey,

I've got vSphere 7 and keep wondering at least once a year if I should ditch vSphere for something open source. I'm thinking KVM-based, but have my eye on the Illumos/FreeBSD bhyve projects, too - although what I've tried of them has been really rough around the edges.

Also open to Xen, although I haven't tried any Xen hypervisors except for a Dom0 on NetBSD about 3 years ago, so I might have kind of a warped view of its capabilities (it was "quaint", read: more proof of concept than anything usable). It doesn't seem like many resources are being poured into it for whatever reason, but I suppose I could be wrong (feel free to dispute me on that).

It's just for my lab. I'm particularly interested in something with easy HA and distributed storage, as I have 3 nodes but not a whole lot of time to set up a solution from scratch, and no $ for more VMware licenses.

Did anyone ditch their vSphere for Proxmox, KVM/LXC, bhyve or similar? Why? And how's it working out?

Thanks :)
 

BoredSysadmin

Not affiliated with Maxell
I have a DIY 3-node vSphere cluster with vSAN. I'm in the process of replacing it with a single QNAP TVS-872XT (8 SATA drives, 64 GB memory, 2x 800 GB M.2 NVMe SSDs). Most of the load will be native tools and containers in QNAP's Container Station, with as few Virtualization Station VMs as possible.
The idea is to simplify management and reduce power usage.
 

acquacow

Well-Known Member
I can't really vote either way, as I maintain ESX, KVM, and Hyper-V in my home lab just to have a mixture of platforms to test things on.

I've yet to see any of my customers running Proxmox, so it'll be a while before I dig into it.
 

i386

Well-Known Member
I'm currently using the VMware stuff (ESXi as the OS, Workstation Pro as a playground on my Windows machine).

But as customers at work move more and more towards Docker and (Azure) Kubernetes, I'm thinking more and more about running similar setups in my homelab.
 

pancake_riot

New Member
I dropped ESXi for Proxmox in my lab. That was in the vSphere 6.x days, and I haven't worked with 7 since then so some of this might be irrelevant, but the result was overwhelmingly positive.

Pros:
  • Lots of RAM and storage freed up from no longer having to run vCenter in a VM. Even at the smallest possible cluster size, vCenter can be demanding.
  • Advanced features like clustering and HA aren't hidden behind expensive licensing.
  • Full-featured, built-in web UI with cluster management.
  • Better hardware compatibility for low-power systems.
  • Ceph storage support built-in, whether that's the Proxmox-supplied version or your own Ceph cluster that you maintain (rough command sketch at the end of this post).
Cons:
  • Basic network setup is fine, but complex network configurations require wiki pages to understand and configure correctly.
  • Much less documentation and training resources. Not a dig at Proxmox because their Wiki is very well-maintained, but VMware has an entire consulting/training industry built around them and it's a lot easier to find someone who's had your exact problem.
  • Limited applicability to enterprise, since vSphere is essentially the industry leader in on-prem compute.
If your goal is to stay up to date with vSphere, by all means, run that - but if you're more concerned with what you're running on the infrastructure versus the infrastructure itself, Proxmox is a great alternative and considerably more lab-friendly.

Now, since your poll asked "if money is no object", I did vote for vSphere. If money was no object, I'd have a lot more compute capacity than I do now and I could afford the full-fat vSphere licensing and support. :D
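For anyone curious what the clustering/Ceph/HA side actually involves, here's a rough CLI sketch. Node names, IPs, VMIDs, and disks below are placeholders, and the exact subcommands vary a bit between PVE versions, so treat it as an outline and check the wiki:

    # on the first node: create the cluster
    pvecm create homelab
    # on each additional node: join it (pointing at the first node's IP), then verify quorum
    pvecm add 192.168.1.11
    pvecm status

    # Ceph using the Proxmox-supplied packages
    pveceph install                        # on every node
    pveceph init --network 10.10.10.0/24   # once; a dedicated storage network if you have one
    pveceph mon create                     # on each node you want as a monitor (3 for a small cluster)
    pveceph osd create /dev/sdb            # on each node, once per data disk

    # HA: once a guest's disks live on shared storage (e.g. the Ceph pool), enroll it
    ha-manager add vm:100 --state started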
 
  • Like
Reactions: AveryFreeman

BoredSysadmin

Not affiliated with Maxell
@pancake_riot Well, if pricing is a concern, then the VMUG Advantage subscription at $200 per year, which lets home lab owners try just about any VMware product/feature, isn't a terrible deal.
Agreed on vCenter resources; at the bare minimum it needs about 10 GB of RAM, which isn't insignificant in a home lab.
Same reason I never went with Nutanix CE: the resources the CVM requires.

Scale Computing has an interesting hyperconverged virtualization product for smaller machines, but AFAIK they don't have any free/community version.
 

AveryFreeman

consummate homelabber
I have a DIY 3-node vSphere cluster with vSAN. I'm in the process of replacing it with a single QNAP TVS-872XT (8 SATA drives, 64 GB memory, 2x 800 GB M.2 NVMe SSDs). Most of the load will be native tools and containers in QNAP's Container Station, with as few Virtualization Station VMs as possible.
The idea is to simplify management and reduce power usage.
I totally get the downsizing argument, but what about HA? Don't you want to be able to take a node down and have your services keep operating?
 

AveryFreeman

consummate homelabber
I dropped ESXi for Proxmox in my lab. That was in the vSphere 6.x days, and I haven't worked with 7 since then so some of this might be irrelevant, but the result was overwhelmingly positive.

Pros:
  • Lots of RAM and storage freed up from no longer having to run vCenter in a VM. Even at the smallest possible cluster size, vCenter can be demanding.
  • Advanced features like clustering and HA aren't hidden behind expensive licensing.
  • Full-featured, built-in web UI with cluster management.
  • Better hardware compatibility for low-power systems.
  • Ceph storage support built-in, whether that's the Proxmox-supplied version or your own Ceph cluster that you maintain.
Cons:
  • Basic network setup is fine, but complex network configurations require wiki pages to understand and configure correctly.
  • Much less documentation and training resources. Not a dig at Proxmox because their Wiki is very well-maintained, but VMware has an entire consulting/training industry built around them and it's a lot easier to find someone who's had your exact problem.
  • Limited applicability to enterprise, since vSphere is essentially the industry leader in on-prem compute.
If your goal is to stay up to date with vSphere, by all means, run that - but if you're more concerned with what you're running on the infrastructure versus the infrastructure itself, Proxmox is a great alternative and considerably more lab-friendly.

Now, since your poll asked "if money is no object", I did vote for vSphere. If money was no object, I'd have a lot more compute capacity than I do now and I could afford the full-fat vSphere licensing and support. :D
Yeah, VMware has something a lot of other orgs don't - tooonnnsss of monnneeyyyy on a continual basis. They can afford to have documentation - if they didn't, I would think there was something extremely wrong with them.

I'm curious about your network config - could you drill down on that a bit? What's your lab like - do you have a few servers? Are your NICs running at different speeds for different purposes (e.g. backend, front end/UI, etc.)?

How did that network config end up working out, are you happy with it / doing the things you wanted it to?

I ask because I have 1Gbps, 10Gbps and 40Gbps NICs between my 3 servers, and this is a big consideration for me since I'm not really a network guy - I have a single /24 AD network, and I understand basic routing and local DNS/DHCP, but have yet to wade into anything more complex on a manual basis (VLAN, BGP, OSPF, etc.)
 

AveryFreeman

consummate homelabber
I can't really vote either way, as I maintain ESX, KVM, and HyperV in my home lab just to have a mixture of things to test things out on.

I'm yet to see any of my customers running proxmox, so it'll be a while before I dig into it.
Nice, who are your customers?

I recently read a post benchmarking Hyper-V vs. VMware Workstation (Type 2 wars) - apparently Hyper-V kicks the shit out of Workstation in performance metrics, but lags immensely for Linux guests? I had no idea, and I wonder how the server-grade stuff compares (Hyper-V Server vs. ESXi on an OS workload basis...). Could be a consideration for particular workloads / clients ... (?)

I do definitely run Windows for some stuff, but not enough that I think I'd want to run Hyper-V... but I do kind of have my eye on S2D in some ways. I just don't know.

Have you tried S2D out at all? The Hyper-V + S2D w/ ReFS HCI setup does look kind of interesting, but I'm not sure I'm curious enough about how it performs to set it all up myself...
 

AveryFreeman

consummate homelabber
@pancake_riot Well, if pricing is a concern, then the VMUG Advantage subscription at $200 per year, which lets home lab owners try just about any VMware product/feature, isn't a terrible deal.
Agreed on vCenter resources; at the bare minimum it needs about 10 GB of RAM, which isn't insignificant in a home lab.
Same reason I never went with Nutanix CE: the resources the CVM requires.

Scale Computing has an interesting hyperconverged virtualization product for smaller machines, but AFAIK they don't have any free/community version.
What's this other one that came around, Mirantis? Is that another one of the same kind of thing?

I only know about them because they have a tool to convert VM workloads to KubeVirt called Coriolis (or something like that) that I saw on Cloudbase: Migrate VMs to Kubernetes? Sure! Why not? - Cloudbase Solutions

It seems like every time a cow farts or seagull shits, they come up with more names for container platforms...
 

BoredSysadmin

Not affiliated with Maxell
I totally get the downsizing argument, but what about HA? Don't you want to be able to take a node down and have your services keep operating?
We are talking about a home lab, not critical production - planned downtime is acceptable. I also try to get around code stability issues by postponing major release updates until a few months after release, once things have matured through several rounds of patches.
I don't really care to test VMware vSphere anymore, certainly not when it costs me around $70/month in electricity.
 
  • Like
Reactions: AveryFreeman

AveryFreeman

consummate homelabber
We are talking about a home lab, not critical production - planned downtime is acceptable. I also try to get around code stability issues by postponing major release updates until a few months after release, once things have matured through several rounds of patches.
I don't really care to test VMware vSphere anymore, certainly not when it costs me around $70/month in electricity.
Damn, where do you live?

I guess I'm kind of spoiled because we have some of the cheapest electricity in the US thanks to all the hydroelectric and wind near Seattle. Not really an excuse for being wasteful, though. I do try to keep my rig under 500 W total, which is hard with 24 SAS drives, but my 3 servers are only single-processor low-power E5s (1.8 GHz 8-core 55 W TDP, 2.0 GHz 12-core 85 W TDP, etc.)

I am working on a routing/firewall device right now that will combine pfSense as a transparent firewall with TNSR for all routing (inter-VLAN L3), in an effort to get multi-gigabit throughput and/or >10 Mpps out of a sub-20 W device, an Apollo Lake ASRock J3455ITX-B. That kind of board shouldn't be able to do better than 940 Mbps (1 Gbps line speed) or around 0.6 Mpps routing with pfSense, but I'm hoping TNSR/DPDK will push packets considerably closer to 10 Gbps line speed. It's kind of a proof of concept right now, but if it works out OK, maybe I can make similar devices for 2-3 other purposes, such as VCSA + file storage + directory servers.
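(For context - and this isn't TNSR's actual workflow, since it manages its own DPDK dataplane - handing a NIC to a DPDK-style fast path generically looks something like this, with a made-up PCI address:)

    lspci | grep -i ethernet            # find the NIC's PCI address
    modprobe vfio-pci                   # userspace I/O driver DPDK binds against
    dpdk-devbind.py --status            # see which driver currently owns each NIC
    dpdk-devbind.py --bind=vfio-pci 0000:02:00.0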

In a somewhat analogous use case, the acceleration ZFS ARC provides for disk transactions might substitute for faster CPU speeds on storage platforms. If there's another platform you'd recommend that does disk caching using RAM, I'm always exploring new ideas. I'm kind of curious to compare BlueStore + RocksDB for Ceph right now, but I'm not sure it really offers the same kind of throughput (ZFS seems pretty unique in that respect), and I'm thinking higher throughput might be worth prioritizing over distribution, for that same reason of being able to use lower-power equipment to provide better results for overall efficiency.
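(Side note for anyone following along: the ARC is easy to watch and cap on Linux/OpenZFS. A minimal sketch, with 16 GiB as an arbitrary example value:)

    arc_summary | head -40                                # hit rate, current size, target size
    grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats

    # cap the ARC so it doesn't fight VMs for RAM (16 GiB = 17179869184 bytes)
    echo "options zfs zfs_arc_max=17179869184" >> /etc/modprobe.d/zfs.conf
    echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max   # or apply immediately at runtime
    # (with root on ZFS you'd also run update-initramfs -u so the cap persists across reboots)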

Once NAND becomes considerably more affordable it will dramatically reduce power requirements for file storage, etc... It's coming...
 

pancake_riot

New Member
Yeah, VMware has something a lot of other orgs don't - tooonnnsss of monnneeyyyy on a continual basis. They can afford to have documentation - if they didn't, I would think there was something extremely wrong with them.

I'm curious about your network config - could you drill down on that a bit? What's your lab like - do you have a few servers? Are your NICs running at different speeds for different purposes (e.g. backend, front end/UI, etc.)?

How did that network config end up working out, are you happy with it / doing the things you wanted it to?

I ask because I have 1Gbps, 10Gbps and 40Gbps NICs between my 3 servers, and this is a big consideration for me since I'm not really a network guy - I have a single /24 AD network, and I understand basic routing and local DNS/DHCP, but have yet to wade into anything more complex on a manual basis (VLAN, BGP, OSPF, etc.)
Until very recently, I had 4 Proxmox VE hypervisors. One was a Dell Poweredge R720 with 2x10Gb, and the other 3 are assorted generations of Dell Optiplex micros each with 1Gb NICs. The R720 hosted my NAS using PCIe passthrough for the HBA to a VM inside Proxmox. I recently moved my storage back to purely physical, so I'm down to the 3 micro hypervisors now.

On the R720, I had the two 10Gb NICs bonded with LACP and configured as a trunk port. Each of the Optiplexes' 1Gb NICs are also configured for trunking. Similar to ESXi, you have a virtual switch (vmbr0) on each host that the VMs attach to and you can configure a VLAN tag on each VM's virtual NIC.
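For reference, that bond + trunk setup is only a few stanzas in /etc/network/interfaces on the Proxmox host - roughly like this (interface names and addresses are placeholders for whatever your hardware uses):

    auto eno1
    iface eno1 inet manual

    auto eno2
    iface eno2 inet manual

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.10.5/24
        gateway 192.168.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

Each VM's virtual NIC then just points at vmbr0 with whatever VLAN tag it needs.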

Speed differences between hosts aren't a problem. When doing VM migrations between them, they'd happily saturate as much of the link as they could.

The coolest thing I'm doing network-wise with Proxmox is a MariaDB Galera cluster made up entirely of LXC containers across my 3 hosts. Each container has a public NIC on my main VLAN and a private NIC on a dedicated VLAN for replication traffic. Doesn't change the total throughput at all since it's still traversing a single 1Gb link, but it's at least logically segmented.

There are also two LXC containers running an HAProxy cluster for TCP load balancing in front of the Galera cluster. Those containers are stored on Ceph and managed by the Proxmox HA cluster.

Proxmox LXC containers are another big pro that I failed to mention before. They're super lightweight and aside from sharing a kernel with Proxmox, you can run a totally separate OS without all the overhead of virtualized hardware.
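The container side is just as scriptable. A rough sketch of that two-NIC / HA pattern (VMID, VLAN tags, storage ID, and template name are all placeholders):

    # LXC container with a public NIC on the main VLAN and a second NIC on the replication VLAN
    pct create 201 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
        --hostname galera1 --cores 2 --memory 2048 \
        --rootfs cephpool:8 \
        --net0 name=eth0,bridge=vmbr0,tag=10,ip=dhcp \
        --net1 name=eth1,bridge=vmbr0,tag=20,ip=10.20.0.11/24

    # hand it to the cluster HA manager (same idea for the HAProxy containers)
    ha-manager add ct:201 --state started
    ha-manager status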
 
Last edited:
  • Like
Reactions: AveryFreeman

AveryFreeman

consummate homelabber
Until very recently, I had 4 Proxmox VE hypervisors. One was a Dell Poweredge R720 with 2x10Gb, and the other 3 are assorted generations of Dell Optiplex micros each with 1Gb NICs. The R720 hosted my NAS using PCIe passthrough for the HBA to a VM inside Proxmox. I recently moved my storage back to purely physical, so I'm down to the 3 micro hypervisors now.

On the R720, I had the two 10Gb NICs bonded with LACP and configured as a trunk port. Each of the Optiplexes' 1Gb NICs are also configured for trunking. Similar to ESXi, you have a virtual switch (vmbr0) on each host that the VMs attach to and you can configure a VLAN tag on each VM's virtual NIC.

Speed differences between hosts aren't a problem. When doing VM migrations between them, they'd happily saturate as much of the link as they could.

The coolest thing I'm doing network-wise with Proxmox is a MariaDB Galera cluster made up entirely of LXC containers across my 3 hosts. Each container has a public NIC on my main VLAN and a private NIC on a dedicated VLAN for replication traffic. Doesn't change the total throughput at all since it's still traversing a single 1Gb link, but it's at least logically segmented.

There are also two LXC containers running an HAProxy cluster for TCP load balancing in front of the Galera cluster. Those containers are stored on Ceph and managed by the Proxmox HA cluster.

Proxmox LXC containers are another big pro that I failed to mention before. They're super lightweight and aside from sharing a kernel with Proxmox, you can run a totally separate OS without all the overhead of virtualized hardware.
Sounds cool. I picked up a pile of five 7050 Micros last year for like 50 bucks. I've been slowly filling them out with CPUs + RAM (I have 3 going so far) and was going to see if I could conceivably use them instead eventually, but at first they'll just be for experimenting:

e.g. can I run what I need to on such low-power machines (I do a lot of video recording on VMs)? Does it make sense to try to run a cloud orchestrator like OKD or Harvester on bare metal with a handful of KubeVirt VMs, or is that still too bleeding-edge?

It's hard to get over the slow interconnect and limited storage, though. With 8 to 16 3.5" drive bays and 1, 10 and 40GbE on my "real" servers, I'm not quite ready to downsize to only 1 NVMe, 1 2.5" SATA and a single 1GbE port.

But maybe if I outboard the storage. I was looking into the possibility of running an x4 PCIe card off the Micro's NVMe slot, which would kind of suck not to have NVMe, but at least I could get some 10GbE action going (or some crippled 40GbE) and then run another server just for storage.

Is that what you mean when you say you "recently moved my storage back to purely physical", or are you back to collecting spiral notebooks, VHS tapes, and record albums?

How'd you manage to get two NICs on each of your Micros?
 

pancake_riot

New Member
Is that what you mean when you say you "recently moved my storage back to purely physical", or are you back to collecting spiral notebooks, VHS tapes, and record albums?
Yeah, I meant having a main storage server where the OS is not virtualized. That's originally what I did with the R720 until I P2V'ed the OS so I could install Proxmox and run some other VMs on the same hardware. That was a few years ago, when my only other hypervisor was an Intel NUC.

How'd you manage to get two NICs on each of your Micros?
They each have only a single NIC, but I've toyed with the possibility of adding a USB3 1Gb Ethernet adapter. 1Gb isn't super limiting for the workloads I run. I'm not doing any media encoding, streaming, etc. on them. The most they probably do to max that link is nightly VM backups and the rare VM migration during the day.

I seem to recall a few Lenovo micros that allowed for a PCIe NIC to be added, but those have some drawbacks as well in that you have to give up the 2.5" drive bays.
 
  • Like
Reactions: AveryFreeman

AveryFreeman

consummate homelabber
Yeah, I meant having a main storage server where the OS is not virtualized. That's originally what I did with the R720 until I P2V'ed the OS so I could install Proxmox and run some other VMs on the same hardware. That was a few years ago, when my only other hypervisor was an Intel NUC.


They each have only a single NIC, but I've toyed with the possibility of adding a USB3 1Gb Ethernet adapter. 1Gb isn't super limiting for the workloads I run. I'm not doing any media encoding, streaming, etc. on them. The most they probably do to max that link is nightly VM backups and the rare VM migration during the day.

I seem to recall a few Lenovo micros that allowed for a PCIe NIC to be added, but those have some drawbacks as well in that you have to give up the 2.5" drive bays.
Yeah, I got these 7050 Micros so extremely cheap I'm probably not going to bail on them any time soon. There are definitely options on eBay for weird Chinese adapter boards - my ultimate preference would be to use the NGFF slots for expansion so I can keep the 2.5" and NVMe, but I believe it's PCIe 2.0 x1, which maxes out around 4 Gbps - still not bad for shoe-horning in a single-port 82599 NIC, crossing your fingers and hoping for the best. But with my luck, all the conversion necessary would cripple it further - all I've been able to find are NGFF to mini-PCIe and then mini-PCIe to PCIe adapters, nothing straight-across, so it's quite a chain of uniquely niche garbage.

At least if I'm using some of those PCIe risers that use USB cables, I could affix the NICs to the floor of this 4U rack box I'm using - the Micros fit inside it sideways perfectly, and 5 would go across with plenty of room for a NIC next to each one. You have to leave the bottom of the case on since there's no CPU retention mechanism without it (and the retention hole pattern is extremely sub-standard), but with the tops removed it helps deal with the awful thermal performance they have stock. I got a 3U plate with a bunch of holes that line up with standard 12cm fans for ~$20 at Sweetwater.

You don't have any sort of distributed PSU for your Micros, do you? I've used 18- to 24-bank 12 V 1 A DPSUs for those old RG59 TVI cameras extensively, but can't seem to find any that come in 19-20 V...

Edit: I found an adjustable PSU that shows promise - it's 20 A, and each computer works out to about 3.5 A, so it might just barely be enough for 5: https://smile.amazon.com/Adjustable-DROK-110V-220V-Switching-Transformer/dp/B08GFSVHLS
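Quick back-of-the-envelope, assuming ~3.5 A per Micro at 19-20 V:

    5 x 3.5 A = 17.5 A against a 20 A supply (about 87% loaded)
    at 19 V that's roughly 5 x 66 W = ~330 W total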

In the past, rather than soldering each wire together, I just slapped these bad boys over the splice and hit 'em with a paint-stripping gun - it requires a lot less precision, and you get solder and shrink tubing at the same time: https://smile.amazon.com/Connectors-Qibaok-Electrical-Waterproof-Automotive/dp/B08P7M28R3
 
Last edited: