Linux KVM Virtualization - Is it mature enough now?


veggiematicpc

New Member
May 5, 2017
Hi STH. It's my first post here so be gentle plz!

I've spent a few hours reading up on KVM this week. I've typically run ESXi and Hyper-V for my clients' virtual machine servers. Typically their servers have 128-256 GB RAM installed and run at 60% utilization with 20-40 VMs. They can be directory servers, DNS, WooCommerce sites, you name it.

Working in the SMB IT space, everyone's always pressuring us about license costs. Windows Server 2012 R2 clients are looking at 2016 and saying no thanks. ESXi starts to get costly once you've gone from 1 to many servers.

I've been reading about Xen (AWS) and KVM (Google) this week. I was first totally into Xen since AWS is huge and they're using it. The more I read about Linux virtualization, the more I'm realizing that KVM has a bigger community install base outside of AWS. And AWS doesn't really share.

I don't want to recommend KVM as an option if it's going to cause us support headaches. Saving a few hundred or a few thousand dollars for a client sounds great, but if I've gotta show up on-site or fix things constantly, I don't want to do it.

Is KVM stable yet? Can you get 90, 180, 365 days of uptime?

If I start with RHEL or CentOS can I move VMs to Debian later with KVM?

How's isolation and security?

Can you run Windows VMs well? Older posts seem to say Windows on KVM was bad. Now it's sounding better, but you've gotta install drivers into the VM?

I know they're noob questions, but STH seems balanced where there are evangelists everywhere else. I posted a similar question in a VMware group and I got the WHY WOULD YOU EVER CHANGE?!?!? At the same time I'm seeing all the VMware enterprise guys start ditching for AWS and clients looking to lower costs.

If my clients go to AWS I'm losing hardware and software revenue. If they go to KVM I can still sell hardware and hosting. I did the numbers and I can save them money and keep our revenue okay with KVM. We need cash flow to keep our banks happy, which is why AWS scares me if the customer pays Amazon directly. Profit is about the same, but we need to keep revenue going.
 

Evan

Well-Known Member
Jan 6, 2016
KVM is solid for sure. Uptime is more a question of required patching, but if you're in a cluster you can just move the VMs around, like with Hyper-V or ESX.

What is the issue with licensing exactly? The extra features? Hyper-V has the free version, but if you run a lot of Windows that's hardly relevant, since it's the licenses required to run the VMs that are the real cost.

I think it also depends what you run...

Lots of windows VM - hyper-v or ESX
Mixture of different OS VM's - ESX
Lots of linux VM's - KVM

Of course you could run all Windows VMs on KVM, for example, but does it really make sense? Not much. And in reverse, does it make much sense to run mostly Linux VMs on Hyper-V?

If what you're really trying to do is move from mostly Windows to mostly Linux to save license costs, then KVM is a good idea, but keep in mind it's not the most simple platform.

Take a look at Proxmox.
 

PigLover

Moderator
Jan 26, 2011
KVM is rock solid. There are some very major deployments using it in private clouds. You can easily go a year+ between restarts as long as you plan patching correctly.

If your issue is moving away from licensing costs then I'd avoid Red Hat. When you add it all up they are as expensive as VMware or Hyper-V. Go with the Debian/Ubuntu packaging, and then if you need it you can buy the support you need from Canonical (or even third parties). The only thing "licensed" in their platform is Landscape, and it's highly likely that you won't want or need that.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Echoing the above, I have KVM VMs on my home network that have been up 5-6 quarters. KVM is a very valid virtualization option and is not a niche. I am not worried that 12 months from now KVM will be extinct.

You do need the virtio integration DVD for Windows. When we did our guide, Windows Server 2012 R2 Intel Optane Memory: Pass-through to VM with full performance, we had to use the driver DVD to get just about everything working.
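For reference, here's roughly what the relevant pieces look like in a libvirt domain definition. This is a sketch only, with example paths (the virtio-win ISO filename varies by version):

```xml
<!-- virtio disk: Windows won't see it until the virtio storage driver is loaded -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/win2012r2.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
<!-- Attach the virtio-win driver ISO as a CD-ROM so the Windows installer
     can load the storage and network drivers from it -->
<disk type='file' device='cdrom'>
  <source file='/var/lib/libvirt/images/virtio-win.iso'/>
  <target dev='hdb' bus='ide'/>
  <readonly/>
</disk>
<!-- virtio NIC: shows up in Windows only after the NetKVM driver is installed -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>
```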
 

TuxDude

Well-Known Member
Sep 17, 2011
Ya - KVM is super solid. Not just in lots of private deployments, but backing lots of major cloud players as well.

Its weakness is on the front end; managing KVM can be more of a pain.

For a small-scale, easy-to-use deployment, you could maybe look at Nutanix Community Edition. Their Acropolis hypervisor is just KVM with a nice web UI on top, plus of course their storage stack that is common across Nutanix on VMware/Hyper-V/etc.
 

awedio

Active Member
Feb 24, 2012
You can't use Nutanix CE for production; it's for labs/POC-type use only.
 

veggiematicpc

New Member
May 5, 2017
Thanks, y'all!

I'm fixin' to give it a go with Ubuntu this weekend. I noticed there are many guides online for Ubuntu and KVM.
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
Licensing costs have certainly become an issue in SMB.

KVM is rock-solid and very efficient (hugepages, virtio net/disk, et al.) thanks to the work done in Linux by Red Hat and others. You should use it through libvirt though, because management of VMs is so much easier that way.

Make sure you learn how to optimize Linux or Windows guests properly, i.e. you should use everything that says virtio on it, especially network and disk. For Windows, the USB tablet used to be a CPU guzzler until support for USB 3.0 via NEC-XHCI came along. Use that exclusively and you will be happy. The SPICE people have also made significant inroads lately, making running a VM through virt-viewer on a Linux desktop viable, and even a little "VMware Workstation"-like. Efficiency-wise, you know you are done when the VM shows 1-3% CPU usage on the host. Finally, don't use BTRFS; use ext4 with qcow2 and be done with it.
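The USB tablet advice above translates to a couple of lines of domain XML. A minimal sketch:

```xml
<!-- USB 3.0 controller so the emulated tablet doesn't burn host CPU
     on the default legacy USB bus -->
<controller type='usb' index='0' model='nec-xhci'/>
<input type='tablet' bus='usb'/>
```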

For anything with 2+ VM hosts I would still use ESXi in one of their cheaper Essentials kits, simply because the tools are much more integrated and mature for stuff like vMotion or Storage vMotion, i.e. for anything with high availability in mind.

Even more importantly, one has to consider backup, because sooner or later you will need it: a secretary deleting the wrong file, a Windows update turning the OS into an unbootable mess, etc. With KVM you have to get comfortable with scripting and be prepared for a steep learning curve. Take a look at this: shogun/libvirt-imagebackup. Can you imagine using it? Maybe even with borgbackup and its fantastic deduplication and grandfather-father-son schemes? Then you're good to go with KVM. If not, you have to look into things like Veeam. Or you do in-VM snapshot backups using Acronis or similar, which is also a possibility.
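To give a feel for the scripting involved, here is a hedged sketch (not the shogun/libvirt-imagebackup script itself) of a live, disk-only backup of one KVM guest using libvirt external snapshots. It assumes virsh and qcow2-backed disks; the paths and names are examples:

```shell
#!/bin/sh
# Sketch: live backup of a KVM guest via a temporary external snapshot.
backup_vm() {
    vm="$1"
    dest="/srv/backup/$vm/$(date +%F)"   # hypothetical destination
    mkdir -p "$dest"

    # Remember each disk target (vda, ...) and its current backing file.
    virsh domblklist "$vm" --details | awk '$2 == "disk" { print $3, $4 }' \
        > "/tmp/$vm.disks"

    # 1. Redirect guest writes into a temporary overlay; base images go read-only.
    virsh snapshot-create-as "$vm" backup-tmp --disk-only --atomic --no-metadata

    # 2. Copy the now-quiescent base images away (borg/rsync would fit here too).
    while read -r target source; do
        cp --sparse=always "$source" "$dest/"
    done < "/tmp/$vm.disks"

    # 3. Merge the overlay back into the base and pivot the guest onto it,
    #    then drop the orphaned overlay file the snapshot left behind.
    while read -r target source; do
        virsh blockcommit "$vm" "$target" --active --pivot
        rm -f "${source%.*}.backup-tmp"
    done < "/tmp/$vm.disks"
}

# Only act when a VM name is given, so the sketch is safe to source or lint.
if [ $# -ge 1 ]; then backup_vm "$1"; else echo "usage: backup_vm <vm-name>"; fi
```

Note this does not quiesce filesystems inside the guest; for consistent application data you would add the qemu guest agent and `--quiesce` on the snapshot.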

There are many VM stack solutions like Proxmox, oVirt, OpenNebula, and a ton more, but I personally would not use them in production in SMB environments, because they usually introduce more layers, and that adds potential for failure (even VM failure) and also widens the security attack surface.
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
I am using USB passthrough with KVM+qemu+libvirt using <controller type='usb' index='0' model='nec-xhci'> and via <hostdev mode='subsystem' type='usb' managed='no'>..., which bridges a USB 2.0 device with a certain vendor and device ID into a Windows 7 VM using a NEC µPD720200 driver there. It works really well with little CPU overhead.
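Pulled together, those fragments end up looking something like this in the domain XML. The vendor/product IDs below are placeholders; substitute the real ones from `lsusb`:

```xml
<!-- emulated USB 3.0 controller in the guest -->
<controller type='usb' index='0' model='nec-xhci'/>
<!-- bridge one host USB device into the guest, matched by vendor/product ID -->
<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <vendor id='0x1234'/>   <!-- placeholder: your device's vendor ID -->
    <product id='0x5678'/>  <!-- placeholder: your device's product ID -->
  </source>
</hostdev>
```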

GPGPU is a different story. On x86 there are three noteworthy GPU manufacturers: Intel, NVIDIA, and AMD. Intel is good for office PCs, using their iGPU for video decode and the light stuff (including games 5+ years old, which is not too shabby for the extra TDP that uses in the uncore part of the chip), but IMHO no one is passing that through into a VM to do GPGPU work.

Then there is NVIDIA. They make great chips, but would I personally use them in a KVM context, or on Linux generally? No. First, their firmware policy sucks; then their driver will even go so far as to actively check whether it's being installed or used in a VM -- and quit working if it is. Yeah, thanks for nothing.

Which basically leaves AMD. If you have a modern Linux distribution and look at e.g. PCI passthrough via OVMF - ArchWiki, there is actually little work needed to virtualize your GPU. Basically, make sure to use a recent kernel (4.9+), load all relevant vfio drivers early in the initcpio, and pass them any PCI device you wish to handle, e.g. your AMD card. After a reboot you are then ready to edit your VM configuration to include the vfio device; see VFIO tips and tricks. Performance should be near native.
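Host-side, that flow boils down to a few config edits. A sketch with example values: the PCI IDs shown are for a Polaris-era AMD card and its HDMI audio function (find yours with `lspci -nn`), and the paths assume an Arch-style mkinitcpio setup:

```
# /etc/default/grub -- enable the IOMMU on the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="... amd_iommu=on iommu=pt"   # intel_iommu=on on Intel hosts

# /etc/modprobe.d/vfio.conf -- have vfio-pci claim the GPU and its audio function
# before the host graphics driver can bind them (example IDs)
options vfio-pci ids=1002:67df,1002:aaf0

# /etc/mkinitcpio.conf -- load the vfio modules early in the initramfs
MODULES=(vfio_pci vfio vfio_iommu_type1 vfio_virqfd)
```

After rebuilding the initramfs and rebooting, `lspci -nnk` should show the card bound to vfio-pci, ready to be handed to a VM.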
 

RobertFontaine

Active Member
Dec 17, 2015
Winterpeg, Canuckistan
AMD and GPGPU libraries don't traditionally go together very well, sadly. I have accepted that CUDA is my path of least resistance for GPU work. For the Xeon Phis, CentOS.

IRL I need to be able to test cameras, video switches, HID monitors, and touch devices on a daily basis, and doing it in VMs is extremely valuable.
For fun: data mining, learning mp, and machine learning on Xeon Phi, NVIDIA CUDA, and Intel CPUs.
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
"Traditionally" but for a few years now AMD using OpenCL has caught up to NVidia CUDA imho, even surpassed if you use double precision.

As for products, what about e.g. the AMD FirePro S9300 x2? Even Google is deploying them now in their cloud services. I'd say that is a valid alternative to CUDA.
 

NISMO1968

[ ... ]
Oct 19, 2013
San Antonio, TX
www.vmware.com
The only big deal about KVM is that the bigger backup vendors don't have agentless support for it. KVM-only shops like Scale Computing come out with their own tinkered backups, but they aren't comparable to what, say, Veeam has. So... you have to make sure every single component of your shiny new KVM infrastructure is going to play well with the other pieces; it might be a little bit more tricky.

P.S. Xen isn't different here :(
 

Net-Runner

Member
Feb 25, 2016
KVM is great if you're doing common virtualization without any additional, specific bells and whistles. I wouldn't recommend building your KVM cluster on Ubuntu though; I'd go with RHEL, but that's my personal opinion. Xen is a good alternative, but KVM's issues are much better documented, so it's easier to support. Proxmox is definitely something worth trying.