Proxmox VE


PigLover

Moderator
Jan 26, 2011
I've been playing with Proxmox VE for the last few days. I'm starting this thread to discuss thoughts about it.

Why not stick with VMWare/ESXi?

I've used ESXi free for the last couple of years and have liked it. So why change now? Mainly - licensing. After buying the C6100 discussed elsewhere here I've crossed the 32GB boundary - so the free version won't do. I've also become interested in clustering/HA features (live migration) that only come with licensed versions of VMWare. Licensing my 4-node, 8 CPU cluster and doing it at a level that includes the advanced features is simply cost prohibitive for me. Also, since I'm basically running a lab, it offends my finer sensibilities that VMWare does not have a low-cost license option for non-commercial use. Heck - even the silly folks in Redmond offer Technet and MSDN options!

Why Proxmox VE?

Well, why not. I could just as easily have done Xen or one of its derivatives. But after reading feature lists and discussions from other users, Proxmox VE just seemed like the right fit. No science here at all. Besides - Nitro's doing Xen and we'll get good writeups from him on that. Here we can discuss another option.

What is Proxmox VE?

Proxmox VE is an open-source project bringing a coherent framework and tools built around two existing Linux virtualization concepts - KVM for full virtualization and OpenVZ for "container" virtualization. Proxmox VE is supported by a commercial support organization providing consulting and management for the project (Proxmox), while the code itself remains open source, GPL and free. This is similar in concept to how Red Hat and SUSE support Linux (except that Proxmox is a direct sponsor of the entire project while Linux itself is larger than any of its support companies).

Where to learn more about Proxmox VE

Proxmox VE start page here, Wiki here and downloads here.
 

PigLover

Moderator
Jan 26, 2011
First Impressions

I'll be adding more posts over the next few weeks. As a first installment here are my initial impressions.

Installation was very, very simple. Grab the ISO. Put it into a DVD drive (or, if you have IPMI, use the remote ISO mount). Boot. It's actually a very fast installer. It requires that you have an installation disk attached, and it will use the entire disk for its own purposes with few or no options. Good and bad.

I was surprised to find that it required static IP addresses to be assigned. More on that later (networking will be a whole topic of its own).

After the install is complete and the system boots you'll get a simple Unix login prompt. More importantly, the platform will be running a web server to manage the virtual environment. From the web interface you can add storage, manage networking, view performance info for the server and any of its VMs, etc. In spirit the interface is very much like vSphere/vCenter from VMWare. Unlike VMWare, however, you do not need a specialized client to access it - everything is there on the web interface.

Well...almost everything. One of my biggest disappointments with Proxmox VE so far is that a lot of the tasks needed to manage the virtualization environment have to be done in the VM host OS (Debian) from the command line. I'll go over networking and clustering in another post - but the summary version is that you need a good deal of Linux expertise to get things set up and managed. More specifically - and especially for the networking part - you need some expertise in Debian's rather unique way of doing things.
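To give a flavor of what that day-to-day command-line work looks like, here is a sketch (the numeric IDs are examples, and exact subcommands vary a bit between Proxmox VE versions; these commands only run on the Proxmox host itself):

```shell
# List the KVM virtual machines known to this host (Proxmox's "qm" tool)
qm list

# Start / stop a KVM guest by its numeric VMID (100 is an example ID)
qm start 100
qm stop 100

# OpenVZ containers are managed with the standard vzctl tool instead
vzctl status 101
vzctl enter 101
```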

VMWare has a command line interface too. The difference is that with VMWare you can do 98% of management functions without ever logging into the ESXi server for the command line. You have to be doing some rather advanced things before you need it - and many of the things that require the command line in VMWare are things they advise against doing anyway. With Proxmox VE, however, it's pretty much mandatory.

On to better things - clustering, managing storage, networking and - of course - creating and managing VMs (including migration from VMWare - or perhaps failed attempts at migration).
 

PigLover

Moderator
Jan 26, 2011
Managing Storage

Proxmox VE uses a storage model similar to VMWare "datastores". When you first install Proxmox VE and connect to the web interface you will see that you have a storage area called "local". This refers to an LVM volume on your boot disk that is created as part of the initial install.

To add additional storage you simply select the server in the web interface and go to the "storage" tab. Select "add" and add the storage element you want. Proxmox VE supports local storage on LVM groups or folders, shared storage using NFS, and iSCSI targets. All of these can be configured directly from the web interface.
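Under the hood those web-interface definitions land in a plain text config file on the host. As a sketch of what that file might contain (the storage names, paths, and NFS server address below are examples, and the exact file layout depends on your Proxmox VE version):

```
# /etc/pve/storage.cfg - storage definitions (illustrative)
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,backup

nfs: vmstore
        server 192.168.1.10
        export /export/vms
        path /mnt/pve/vmstore
        content images
```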

When you define "storage" in Proxmox VE you also define what types of items are stored in each place: VMs, ISO images, Backups, etc. This limits the types of items that Proxmox VE will store in each storage group. When clustering, however, it is useful to group together all places where a particular type of item is stored.

When clustering (covered later), any network-based storage you define automatically becomes visible to the entire cluster. This makes it easy to create a common storage architecture across all nodes: define an NFS-based datastore called "VMs", store a VM there, and you can migrate that VM to any node in the cluster and it will run. Proxmox VE also supports "live migration" of a running VM stored on an NFS-based datastore (with certain restrictions on how that VM is configured).
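As a sketch, migration is a one-liner on the host (the VMID and target node name are examples, and the exact flag spelling varies slightly between versions):

```shell
# Move VM 100 to cluster node "pve2"; the -online flag requests a
# live migration while the guest keeps running (shared storage required)
qm migrate 100 pve2 -online
```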

Finally, you can even use SMB/CIFS shares for datastores. The method is somewhat crude, and it works because you can use any "folder" on the Proxmox VE host OS as a datastore. To use an SMB/CIFS share you go to the host OS, mount the share from the command line (and set up /etc/fstab so that it re-mounts on boot), and then create a datastore using the mount point of the share. Crude - but it does work.
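A minimal sketch of those steps (the server address, share name, and credentials are examples; run as root on the Proxmox host):

```shell
# Create a mount point and mount the SMB/CIFS share by hand once
mkdir -p /mnt/smb-store
mount -t cifs //192.168.1.20/vmshare /mnt/smb-store \
    -o username=proxmox,password=secret

# Then make it permanent with an /etc/fstab entry so it re-mounts on boot:
# //192.168.1.20/vmshare  /mnt/smb-store  cifs  username=proxmox,password=secret  0  0
```

After that, add /mnt/smb-store as a directory-type datastore in the web interface.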

For more information see here.
 

PigLover

Moderator
Jan 26, 2011
Networking

Proxmox VE uses a very simple networking scheme based on Linux "bridge" devices.

Basic Networking

A Linux "bridge" is used to connect multiple network interfaces together. Traffic that is transmitted from any of the interfaces in the bridge is send out on all interfaces of the bridge. Any traffic received on one interface in the bridge is re-transmitted on all of the others. A bridge is similar in concept to an Ethernet switch.

By default, Proxmox VE creates a single bridge, "vmbr0", and attaches the first Ethernet interface (usually "eth0") to it. Proxmox also assigns the static IP address you gave the host during installation to this bridge.
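This is plain Debian networking underneath. A sketch of what the installer-generated config looks like (the addresses below are examples; edit it at your own risk and restart networking afterwards):

```
# /etc/network/interfaces (Debian-style; values are illustrative)
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
```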

The downside of the Linux Bridge

Unfortunately, a Linux bridge is not a very efficient thing. Unlike the switches in our physical networks, all of the forwarding and filtering in a Linux bridge is done in software in the Linux kernel - which means in the CPU of your VM host. The physical interfaces connected to the bridge need to run in "promiscuous" mode, listening to ALL traffic on the link - even traffic not destined for your host or one of its VMs - and sending all of it up to the kernel for processing and filtering rather than handling this function in the NIC hardware.

And - perhaps most costly of all - most of the protocol offload capabilities of your NICs are bypassed. In modern NICs, especially server NICs, much of the low-level IP and TCP processing is implemented inside the NIC and happens without creating overhead on your CPU. When you attach an Ethernet interface to a Linux bridge, however, these functions generally get disabled.
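You can see (and, if needed, toggle) those offloads yourself with the standard ethtool utility. A sketch, assuming the bridged NIC is eth0 (run as root on the host):

```shell
# Show which offload features are currently enabled on the NIC
ethtool -k eth0 | grep -E 'segmentation|offload'

# Offloads such as TCP segmentation (tso) and generic receive
# offload (gro) can be disabled by hand if they misbehave under bridging
ethtool -K eth0 tso off gro off
```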

When compared to VMWare's 'vswitch', the Linux bridge model is quite primitive. VMWare has also worked with several standards groups - and directly with both Intel and Cisco - to ensure that hardware virtualization features are available in most high-end server NICs, so that a NIC serving a vswitch retains the ability to offload most of the protocol processing to the NIC.

Networking is an area where VMWare still retains a huge advantage over Proxmox VE (and other kernel-based virtualization environments like Xen/XCP).
 

PigLover

Moderator
Jan 26, 2011
Migrating VMs from VMWare

Reserved to discuss migrating VMs from VMWare (or perhaps failed attempts at doing this...)
 

Mike

Member
May 29, 2012
EU
Webgui's are for guurls. Configuring the host by CLI can be seen as slightly harder from a Vmwarez perspective but on the other hand let's face it; yer not going to virtualise a server 2012 farm on proxmox, so knowledge of the cli is needed for guests anyway. You might want to dive in a bit deeper on the difference between containers and full virtualisation. Also on KSM, although not strictly being used for KVM, it allows for greater efficiency with your vram. SPICE display protocol and the planned features of it. Good iommu support.
Ok, ok, I sound like a mac user but for what it's worth; i also have vcenter and esxi licenses

And on the Debian bit, Redhat user are thou? :rolleyes:
 

dswartz

Active Member
Jul 14, 2011
I have been using Proxmox for a couple of years, and am working on getting off it and onto XenServer. My two biggest complaints: it runs a heavily patched kernel (to support OpenVZ plus whatever else they feel is needed), which makes it a PITA to install your own kernel. Also, it seems to be run by a very small staff, and I found it off-putting how dismissive they seem on the forum when someone makes a suggestion or offers helpful criticism. Just my 2 cents worth...