Free Virtualization operating systems


macrules34

Active Member
Mar 18, 2016
407
28
28
40
Hi,

I'm looking for a free virtualization operating system. I have tried Proxmox, but I continually run into problems with it (it isn't stable enough) and would like to try something else.

I have 4 nodes with the following specs:
- 2x 4-core CPUs
- 16GB of RAM
- 1x 128GB USB boot drive
- 2x 1TB HDD
- 1x Emulex LPe11000 (single port; would like to have all VMs on a single SAN LUN from each node)
 

RTM

Well-Known Member
Jan 26, 2014
956
359
63
So you haven't really stated what you are looking for from the virtualization OS, other than the obvious of course.
Since you posted this thread in the Linux forum, can we assume you are looking for something based on Linux?

In any case there are MANY options that may be usable; here are some, in no particular order:
  1. Hyper-V Server 2019
  2. XCP-ng
  3. OpenStack RDO
  4. Debian/Ubuntu using virt-manager
  5. CentOS using virt-manager (deprecated) or Cockpit
 

RTM

Well-Known Member
Jan 26, 2014
956
359
63
Oh and another thing, are you sure that the stability issues are due to the software?
I get that it may be worth trying something else, but obviously if the issues lie in hardware or firmware, a new OS will not help you.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
I'm curious what stability issues you found and whether it might be more fruitful to debug/repair them? As @RTM says, if they are issues with your underlying platform then going to another virt manager won't fix them. Also, Proxmox is widely used and has an active user base who are not afraid to air it out in their discussion forum - it is not evident that Proxmox itself has general stability issues. The forum you are posting on (STH) happens to run on Proxmox and it is a very stable place indeed (though I sometimes question whether all the people who post here are :)).

You may have a simpler road trying to fix what you have - and in the end you are more likely to take away valuable learnings doing this than you would just dropping in another cloud manager.

Lastly, I think you might get better answers to your original question about possible alternative platforms if you described a bit about the apps you are trying to run on it. Only knowing the hardware you are running on leaves a lot of what you are trying to achieve to the imagination of the people responding.
 
Last edited:

macrules34

Active Member
Mar 18, 2016
407
28
28
40
The most recent problem is that I had to reboot a node and the VM on that node won't boot now. I'm not in my office, so I'll get the error tomorrow. Also, the operating system doesn't seem to handle multipathed storage well.
 

schmookeeg

New Member
Mar 25, 2016
27
14
3
47
Alameda, CA
www.msxpert.com
I took a tour of what was out there a few months ago. I prefer my VMs to 'just work', and if they misbehave, I don't want to sift through months of logs to find out why. A yellow exclamation point in the Windows client is easier. :D Infra isn't what pays my bills.

I've had no problems with VMware or Xen at my colo.

Citrix recently made a mess of their licensing, so I've been evaluating XCP-ng. I must say, it's been rock-solid. I 'think' more like Xen than VMware and prefer it for the lower cognitive load. I'll be using it at the colo with my next hardware refresh. I have no idea if this matches your needs, OP, so just anecdata for you.

$0.02

- Mike
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,512
5,800
113
The forum you are posting on (STH) happens to run on Proxmox and it is a very stable place indeed (though I sometimes question whether
First off, I totally agree. We probably have 40+ Proxmox hosts between the lab, the STH hosting cluster, and even my home. There is zero chance I would use it in our hosting cluster these days if I thought it was not stable.

Second, @PigLover as I was reading that I thought it was going to be about how I am the weakest link in our hosting cluster.

At the end of the day, Proxmox uses KVM. KVM is what AWS and others use. It is the most widely used virtualization technology by a wide margin. My sense is that you have something strange in your setup if that is happening.
 

macrules34

Active Member
Mar 18, 2016
407
28
28
40
The only thing that I can think of is that I have storage LUNs connected to each server from an EMC AX4-5F. Is there a location that I should look at for the log files for each VM?
 

macrules34

Active Member
Mar 18, 2016
407
28
28
40
I went to start the VM and it booted up fine. I wonder what was causing the problem. Are there any logs that would show the reason for the VM not booting?
 

RTM

Well-Known Member
Jan 26, 2014
956
359
63
So... out of curiosity I did a bit of googling to determine where Proxmox stores logs, but alas the only thing I could find was this article:

In short, they suggest you look into the /var/log/syslog file if you have issues booting a guest VM.
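For reference, a minimal sketch of where to look on a Proxmox node (VM ID 100 and the paths assume a stock PVE install; they are placeholders, not taken from this thread):

```shell
# 1. Try starting the guest from the CLI; 'qm' prints the failure reason
#    directly instead of burying it in the GUI task viewer.
qm start 100

# 2. Search the system log for messages mentioning that VM or its start task.
grep -i 'qmstart\|vm 100' /var/log/syslog

# 3. Per-task logs (start/stop/migrate) live under /var/log/pve/tasks/ --
#    the most recently modified files correspond to the latest tasks.
ls -lt /var/log/pve/tasks/
```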
 

macrules34

Active Member
Mar 18, 2016
407
28
28
40
Once again I have had a problem with a Proxmox server (I don't think it's a Proxmox issue though). I get continuous scrolling of an error message that says "CIFS VFS: send error in sessionsetup = -13" - any idea what might be causing this? This is the second time this has happened to one of my Proxmox servers; last time a reboot of the node fixed it. But this time I'm having trouble booting the node (probably unrelated to Proxmox).
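For what it's worth, -13 is EACCES (permission denied), i.e. the server is rejecting the CIFS session setup. A hedged sketch of things to check - the share name, mount point and credentials file below are placeholders:

```shell
# See the full kernel-side CIFS errors, not just the scrolling console line.
dmesg | grep -i cifs

# List the currently mounted CIFS filesystems and their mount options
# (SMB protocol version, username, etc.).
mount -t cifs

# Remounting by hand with an explicit credentials file and SMB version
# often surfaces the real problem (bad password, SMB1 disabled server-side,
# expired account, ...). //san/share, /mnt/test and /etc/cifs-creds are
# placeholders -- substitute your own.
mount -t cifs //san/share /mnt/test -o credentials=/etc/cifs-creds,vers=3.0
```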
 

RTM

Well-Known Member
Jan 26, 2014
956
359
63
It sounds like you are using SMB to connect to your SAN. While I am not saying it will not work, I would expect worse functionality/stability with that than with something like NFS or iSCSI. I am no SAN expert by any means, so I can't tell you for certain whether switching protocols would help, but instinctively it sounds like a good idea to me.
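If the SAN/NAS can export NFS, Proxmox can consume it as a storage backend directly via `pvesm`. A sketch, assuming a hypothetical server at 192.168.1.50 exporting /vols/vmstore (both placeholders):

```shell
# Add an NFS-backed storage called "san-nfs" for VM disk images.
# Server address, export path and storage ID are placeholders.
pvesm add nfs san-nfs \
    --server 192.168.1.50 \
    --export /vols/vmstore \
    --content images \
    --options vers=3

# Verify the new storage shows up as active on this node.
pvesm status
```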
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,053
437
83
You might want to try the free Nutanix CE edition - it uses the AHV hypervisor, which is based on KVM. The free edition is limited to 4 nodes and, to be honest, you need a bit beefier hardware to run it - 32GB+ RAM would be highly advised, and a 200GB or larger SSD in each node for writes.
 

WANg

Well-Known Member
Jun 10, 2018
1,307
971
113
46
New York, NY
It sounds like you are using SMB to connect to your SAN. While I am not saying it will not work, I would expect worse functionality/stability with that than with something like NFS or iSCSI. I am no SAN expert by any means, so I can't tell you for certain whether switching protocols would help, but instinctively it sounds like a good idea to me.
I ran Proxmox clusters before, mapping VM disk image datastores back to NetApp filers via NFSv3 - it's not like you can't run VMs on it. I also ran ONTAP 7-mode where I had stuff shared on CIFS (just not VM disk images) on the same filer, automapped as data directories on VM images shared out via a different mountpoint on NFSv3 (it's a trading firm where our back office stuff runs on a Windows laptop and pulls, via a shared directory, analysis produced on Linux VMs). A well-managed NAS/SAN should work equally well attached to Linux (Proxmox), Windows Server (Hyper-V) and/or VMware vCenter.

However, there are certain gotchas you need to be aware of:

a) If it’s CIFS, make sure your CIFS/SMB server is sane, and that you won’t get tripped up on permissions. If it’s NFS, make sure that the exports are sane, and for iSCSI, that the target/initiator configuration is correct. The smoking gun in the logs is usually not on the VM host side; it’s on the NAS/SAN side.

b) If your server has multiple VLANs and NICs, make sure they have sane network configurations - I remember a SAN failover taking down 25% of the VMs in a cluster because someone put a /23 instead of a /22 in a network interface netmask. As a corollary, never assume that the network connectivity to the SAN/NAS is fault-free. If it glitches out or drops packets, you will see intermittent mount/read/write issues.

c) On the VM datastores, make sure you don’t have stale lock files, and that nothing has pushed the NAS/SAN into read-only for that datastore mountpoint. I remember a QSFP module flipping out at my gig; the SAN freaked out and all of a sudden 16 VMs locked up. It has happened before. When in doubt, ssh into the hypervisor, go to the datastore location and see if you can touch a test file. If it just sits there, it’s probably the storage acting up.
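That last check can be sketched as a one-liner; the datastore mountpoint below is a placeholder - substitute yours. The `timeout` keeps a hung NFS/CIFS mount from wedging your shell:

```shell
# Hypothetical Proxmox datastore mountpoint -- adjust to your setup.
DS=/mnt/pve/san-store

# Try to create a file, giving up after 5 seconds if the mount is hung.
if timeout 5 touch "$DS/.healthcheck" 2>/dev/null; then
    echo "datastore writable"
    rm -f "$DS/.healthcheck"
else
    echo "datastore hung or read-only"
fi
```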
 
Last edited:

RedneckBob

New Member
Dec 5, 2016
9
1
3
120
You might want to try the free Nutanix CE edition - it uses the AHV hypervisor, which is based on KVM. The free edition is limited to 4 nodes and, to be honest, you need a bit beefier hardware to run it - 32GB+ RAM would be highly advised, and a 200GB or larger SSD in each node for writes.
The first hit is always free :oops:
 

macrules34

Active Member
Mar 18, 2016
407
28
28
40
/dev/zvol/SAN_Superbird is a ZFS volume created on a SAN LUN assigned to a VM.

My set-up consists of 4 servers, each server has 2x Intel Xeon L5420 CPUs, with 16GB of RAM and 2x 1TB hard drives (used in a Ceph cluster to host the VMs) and a 128GB USB stick for the boot drive. Each node has a 60GB SAN LUN attached, which is multi-pathed.

Looks like the node dropped out of the cluster again. I had to reboot, and it has been unable to boot ever since.
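Given that setup (Proxmox cluster, Ceph on the local HDDs, multipathed SAN LUN), a few commands worth running on a node that keeps dropping out - all standard tools on a PVE/Ceph host, nothing here is specific to this thread:

```shell
# Cluster membership and quorum as Proxmox/corosync see it.
pvecm status

# Ceph health summary -- OSDs on the 2x 1TB drives should all be up/in.
ceph -s

# Multipath state for the SAN LUN; dead paths show as "failed"/"faulty",
# which would match the multipath trouble described earlier.
multipath -ll
```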
 
Last edited: