Proxmox VE 5.0 beta


Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
I had to update something in the data center today and wanted to post this. Proxmox VE 5.0 beta1 released!

@PigLover looks like Ceph is finally getting an update!

We are proud to announce the release of the first beta of our Proxmox VE 5.x family - based on the great Debian Stretch.

With the first beta we invite you to test your hardware and your upgrade path. The underlying Debian Stretch is already in good shape, and the 4.10 kernel performs outstandingly well. The 4.10 kernel, for example, allows running Windows 2016 Hyper-V as a guest OS (nested virtualization).
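
Side note from me, not part of the announcement text: if you want to try the nested virtualization bit, here is a rough sketch of how it is usually enabled on an Intel host (AMD uses the kvm_amd module instead; the VM ID below is made up):

# check whether nesting is already enabled (Y/1 = on)
cat /sys/module/kvm_intel/parameters/nested

# enable it persistently, then reload the module with no VMs running
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel

# expose the host CPU (including VMX) to the guest, e.g. VM 100
qm set 100 -cpu host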

This beta release already provides packages for Ceph Luminous v12.0.0.0 (dev), the basis for the next long-term Ceph release.

What's next?
In the coming weeks we will integrate new features into the beta step by step, and we will fix all release-critical bugs.

Download
Download ISO installer, service packs, and software documentation for Proxmox products

Alternate ISO download:
http://download.proxmox.com/iso/

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

FAQ
Q: Can I upgrade a current beta installation to the stable 5.0 release via apt?
A: Yes, upgrading from beta to stable installation will be possible via apt.

Q: Can I install Proxmox VE 5.0 beta on top of Debian Stretch?
A: Yes, see the wiki article "Install Proxmox VE on Debian Stretch".

Q: Can I dist-upgrade Proxmox VE 4.4 to 5.0 beta with apt dist-upgrade?
A: Yes, you can (a rough sketch of the steps follows after this FAQ).

Q: Which repository can I use for Proxmox VE 5.0 beta?
A: deb http://download.proxmox.com/debian/pve stretch pvetest

Q: When do you expect the stable Proxmox VE release?
A: The final Proxmox VE 5.0 will be available as soon as Debian Stretch is stable, and all release critical bugs are fixed (May 2017 or later).

Q: Where can I get more information about feature updates?
A: Check our roadmap, forum, mailing list and subscribe to our newsletter.
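
For those asking about the repository and the in-place upgrade: a rough sketch of the path from a 4.4 node, assuming a single non-clustered box and the file names below (the pve.list name is just an example; back up before you start):

# point Debian at stretch instead of jessie
sed -i 's/jessie/stretch/g' /etc/apt/sources.list

# switch the Proxmox repo to the beta (pvetest) repository
echo "deb http://download.proxmox.com/debian/pve stretch pvetest" > /etc/apt/sources.list.d/pve.list

# pull everything in
apt-get update && apt-get dist-upgrade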

Please help us reach the final release date by testing this beta and providing feedback.

A big Thank-you to our active community for all feedback, testing, bug reporting and patch submissions.
 

Eric Faden

Member
Dec 5, 2016
98
6
8
41
Is it better to do a clean install or can I upgrade an existing node? I have a two machine cluster in my home lab.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Is it better to do a clean install or can I upgrade an existing node? I have a two machine cluster in my home lab.
Is it single or clustered? I did do a single node (non-clustered) and the upgrade worked fine.
 

McKajVah

Member
Nov 14, 2011
59
10
8
Norway
Looks like they have released some updates today. Hopefully they fix some of the early bugs (like auto startup and shutdown of VMs).

Will try it out later.

Remember you have to put "download.proxmox.com/debian/pve stretch pvetest" in your repository.

Edit:
Did the update and VMs are starting and stopping normally now.
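
In case anyone wants to check the same thing, a minimal sketch of flagging a VM for automatic start at boot and confirming what you are running (VM ID 100 is made up):

# show the installed Proxmox/kernel package versions
pveversion -v

# mark VM 100 to start automatically when the host boots
qm set 100 -onboot 1

# confirm the flag is set
qm config 100 | grep onboot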
 
  • Like
Reactions: Patrick

jfoor

Member
Feb 4, 2017
81
20
8
36
I've never understood what people love about Proxmox. Not saying there's nothing to like... I just haven't seen it yet.

What does it have that's better/different than using a FreeNAS box for storage + ESXi or XenServer for VMs?

Is it just a different flavor so to speak or does it have additional features?
 

poutnik

Member
Apr 3, 2013
119
14
18
For me it's the underlying Debian in Proxmox. It improves manageability (read versatility) and lets me tinker more with the base system.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Reasons to like/prefer over ESXi:
- KVM/LXC based. KVM is taking market share from VMware and is much more flexible
- Totally open source, I can look inside to see how it works.
- No license required to use it; you pay only if you want support
- Linux/Debian/Ubuntu baseline OS - I can freely add things like monitoring agents, etc.
- Reasonable integration with related products (Ceph, etc.) without VMware's ridiculous license costs

Reasons to like over Xen:
- Are there really any reasons to like Xen?
- Biggest "real" reason: Xen's IO model guarantees poor IO performance (long story, not fixable without major surgery on the Xen core design)
 
  • Like
Reactions: T_Minus and Patrick

jfoor

Member
Feb 4, 2017
81
20
8
36
Thanks for the replies! I too would prefer to have an accessible Debian base OS, open source, etc. I had been using XenServer at home to run 10 or so VMs because of VMware's prices. I hadn't realized Xen was so much worse with IO :( It's worked for my relatively non-demanding home use, and I've not benchmarked anything to see what kind of numbers it's capable of.

Might have to check out Proxmox!
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
The IO problem with Xen is a pretty basic design issue. Both KVM and VMware use a kernel hierarchy, with all devices connected to the VM host kernel and all VMs subtending that kernel. The primary difference between them is that VMware provides a "thin" kernel with just enough to manage the VMs, while KVM is embedded in a "full service" host kernel. But in both cases IO transactions go VM<->kernel<->device. (I know virtualization purists will quarrel with this description - but it's close enough to make the point).

In cases where direct access to the IO is needed, both VMware and KVM provide passthrough methods where the VM is given direct access to the device (either with or without SR-IOV).
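
For reference, a minimal sketch of what that looks like on the Proxmox/KVM side, assuming an Intel board and a device sitting at PCI address 01:00.0 (the address and VM ID are made up):

# enable the IOMMU on the kernel command line (AMD boards use amd_iommu=on)
# in /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub
reboot

# pass the device at 01:00.0 through to VM 100
qm set 100 -hostpci0 01:00.0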

Xen's "domain" model is a bit different. Xen builds up VMs on top of its kernel in what it calls "domains". In Xen's model, "domain zero" (or dom0) is a "special VM" and all of the environment management and IO is handled by dom0 in quasi-user space. All IO devices are essentially "passed through" to dom0, and the IO transaction for a VM goes VM<->kernel<->dom0<->device (or worse - if you are using XAPI on Linux, VM<->kernel<->dom0<->kernel<->device), resulting in an extra hop across the context boundary between kernel mode and user mode. For most general purpose IO this is of limited impact, but for high performance IO or "small transaction" IO (like small packet network IO for VoIP) it can be devastating. Again - design purists will quarrell with my description, but it is "correct enough" to get the point across.

Note: this description of Xen's IO model reflects my understanding from at least 18 months ago. The last time I looked "in depth", they were prioritizing getting the Xen kernel itself up to 64-bit operation in order to support newer 64-bit-only devices. They had proposals in play to "fix" the IO overhead problem. Full disclosure: I gave up on Xen a while back and haven't checked on progress in newer releases.
 
  • Like
Reactions: jfoor and Patrick

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Let me clarify one other small detail about Proxmox.

My real preference is for KVM for hosting VMs. Proxmox is simply a convenient vehicle to manage KVM in a small-scale datacenter or lab.

In larger/more complex environments a full cloud suite is better (e.g., OpenStack). But the overhead and complexity of running OpenStack for a small lab stack is just silly (though getting better as OpenStack matures and with container-based packaging from people like Canonical).
 
  • Like
Reactions: Marsh

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
On that note, I do also think that KVM has a lot of momentum and much of the Proxmox KVM learning you may do will directly translate into managing KVM elsewhere.

There are a few other nice things from Proxmox. One is that they have a working and easy-to-use installer for using ZFS as the root filesystem.
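
If it helps anyone, a quick sketch of what you typically see after a ZFS-root install; as far as I know the installer's default pool name is rpool:

# pool created by the installer
zpool status rpool

# datasets backing the root filesystem and local VM storage
zfs list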
 

Marsh

Moderator
May 12, 2013
2,644
1,496
113
Couldn't agree more.
I tried OpenStack for my 4 nodes, just too much headache to keep it running.
 
  • Like
Reactions: Patrick

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
The ZFS installer and direct support for ZFS in general is very nice to have. Even on a backup machine that has a single container, I used Proxmox just for that. ZFS just worked. And having it live in a container means migrating it might be easier to deal with later. I also like the web-based console for quick checks on array health, etc., though more often than not I use SSH for that.

For the real server, I like running ZFS on the host and pushing directories into containers. It segments things out, and performance is still good as there aren't a lot of translation layers and VM layers in the way. That's less of an issue now with VirtIO and VMware drivers, but there's still more overhead with KVM/ESXi full VMs.
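
A minimal sketch of that host-ZFS-plus-container pattern, assuming a pool called tank and a container with ID 101 (both names are made up):

# dataset on the host
zfs create tank/media

# bind-mount it into container 101 at /mnt/media
pct set 101 -mp0 /tank/media,mp=/mnt/media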

I got bit by the Xen overhead a few years ago. It didn't sound bad till I started loading the system down more, then performance tanked for any I/O tasks.
 

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
Reasons to like/prefer over ESXi:
- KVM/LXC based. KVM is taking market share from VMware and is much more flexible
- Totally open source, I can look inside to see how it works.
- No license required to use it; you pay only if you want support
- Linux/Debian/Ubuntu baseline OS - I can freely add things like monitoring agents, etc.
- Reasonable integration with related products (Ceph, etc.) without VMware's ridiculous license costs

Reasons to like over Xen:
- Are there really any reasons to like Xen?
- Biggest "real" reason: Xen's IO model guarantees poor IO performance (long story, not fixable without major surgery on the Xen core design)
Interesting notes on Xen. Doesn't AWS run Xen? How do they get around the poor IO?

Edit: just saw PigLover's response.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Interesting notes on Xen. Doesn't AWS run Xen? How do they get around the poor IO?
I do not believe AWS runs Xen. Can't confirm what they do run.

Check that - a quick Google search answers it. Yes - it is Xen. But not Xen as you and I can use it. They have tweaked it heavily to address performance and security issues - and do not disclose what they have done.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
I'm drawn to Proxmox because, as @Patrick said, the ZFS filesystem for local storage is baked in. This means that, depending on the number of drives, you may avoid an HBA entirely, unlike running VMware + FreeNAS or Napp-IT for an all-in-one, where you MUST have an HBA to pass through. Whether onboard or PCIe, those controllers use more power, and if you're going to have 4+ systems the power adds up fast. I also mainly deal in Linux, so Proxmox seems to make more sense, and overall more financial sense too.
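
A minimal sketch of that all-on-the-host setup, assuming two spare disks for a mirror and made-up device/pool/storage names:

# build the pool directly on the Proxmox host - no HBA passthrough needed
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc

# register it as VM/container storage
pvesm add zfspool tank-zfs -pool tank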

Still playing around with Proxmox myself, but so far I'm liking it... not nearly as intuitive as VMware, and for sure not nearly as many resources on the web, but it's growing fast.
 

OBasel

Active Member
Dec 28, 2010
494
62
28
Still playing around with Proxmox myself, but so far I'm liking it... not nearly as intuitive as VMware, and for sure not nearly as many resources on the web, but it's growing fast.
If they bolted on Docker and a ZFS storage GUI for parity with Ceph, you'd be in nirvana.