removing ESXi from the all-in-one

mixer

Member
Nov 26, 2011
I've been doing some testing, and may make a rash decision soon, so please talk me down if I'm nuts here.

I'm thinking of taking my ESXi 5.0 all-in-one host that runs Gea's napp-it on OpenIndiana (in a VM with on-board SATA passthrough to it) and making it an OpenIndiana or OmniOS machine that runs napp-it (physical-izing it if you will) with VMs running in KVM.

• Why? For one, I don't like having to manage ESXi from a Windows machine. I know most management can be done from the CLI on Linux, but I did some reading and it seems daunting. On the rare occasion I need to view a VM's console or desktop, I have to boot up my Windows XP VirtualBox VM on my MacBook Pro, fire up vSphere, and view the console, which even then doesn't always work well with text entry or mousing.

Also, the free version of ESXi has a limited feature set (it's difficult to back up VMs, for example). And did passthrough of onboard SATA ever start working again after the 5.1 update? VMware could also kill the free edition at any time.

Also I don't feel 100% comfortable with the whole NFS datastore thing - I'd rather have the VMs right there on native ZFS so I can snapshot them, archive them, etc.
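
For example, with the VMs on a local pool the usual ZFS tools apply directly. A quick sketch, with made-up pool/dataset names:

  # snapshot a VM's dataset before touching the guest
  zfs snapshot tank/vms/owncloud@pre-upgrade

  # roll back if things go sideways
  zfs rollback tank/vms/owncloud@pre-upgrade

  # archive it to another pool
  zfs send tank/vms/owncloud@pre-upgrade | zfs recv backup/vms/owncloud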

One less layer of complexity overall.

• What am I giving up? 10Gb Ethernet links, which I don't need since I won't be using networked datastores anymore. The 'pretty' GUI from vSphere -- I might miss that; there's some value in what it can show. Memory over-commitment -- with 32GB of RAM I'll be fine for VMs and ARC, I think, with no need to over-commit RAM to VMs. Probably some other features I'm not thinking of...

• What am I gaining? As mentioned, VMs stored on the local ZFS pool. Super easy, great-performing VNC connections to guest OSes. More work done on the host hardware as opposed to in VMs -- my household file-serving needs will be handled by napp-it/OI. The ability to use 'Zones' instead of KVM machines where appropriate (see the sketch below). Fewer things to update (ESXi, VirtualBox, my Windows XP VM).
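
For the Zones piece, the stock illumos tooling is enough. A rough sketch, with an invented zone name and path:

  # define a minimal zone
  zonecfg -z testzone 'create; set zonepath=/zones/testzone'

  # install and boot it
  zoneadm -z testzone install
  zoneadm -z testzone boot

  # attach to its console
  zlogin -C testzone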

------------Will it work reliably though???

I know QEMU-KVM is pretty new on OI/OmniOS, but am I correct that KVM is used by many virtual hosting companies, so it's generally considered stable and legit? The SmartOS people said they found no bugs at all during their port to illumos!
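
For anyone curious what that looks like without a management layer, the illumos KVM port is driven with a plain qemu command line. A rough sketch with invented paths and sizes (the exact binary name depends on how the port is packaged, and a real setup would normally use a Crossbow VNIC for networking rather than user mode):

  # boot a guest with 2GB RAM, 2 vCPUs, an e1000 NIC, and a VNC console on :1
  qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
    -drive file=/tank/vms/centos.img,format=raw \
    -net nic,model=e1000 -net user \
    -vnc :1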

I ran some Geekbench tests and guests seem to perform well on the CPU side of things. VM to VM I got almost 800 Mbps using e1000 virtual NICs (iperf). I haven't done any disk speed tests yet.
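
That test was just stock iperf, nothing fancy (guest IP here is an example):

  # on one guest
  iperf -s

  # on the other
  iperf -c 192.168.1.50 -t 30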

I'll probably only be running a VM doing Owncloud (for professional file-sharing needs), a CentOS machine doing email (atmail, I think), a pfSense router, Vortexbox (media streaming/transcoding for home use), and whatever else I want to play with for a while.

And to answer the question, why not SmartOS? Well, because there's no napp-it -- and I have been enjoying my napp-it replication setup, which will continue to function on an OmniOS- or OI-based system.
 

cactus

Moderator
Jan 25, 2011
HaHa, I have been looking into this all weekend, spurred by Patrick's thread about the site and a backup server I want to build. The biggest hurdle is a frontend; sure, the CLI probably isn't too hard to learn, but I like simple.

AQEMU does not do remote servers.
Maybe virt-manager
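
virt-manager can drive a remote KVM box over SSH, at least where libvirt is available (hostname here is invented):

  # connect to a remote libvirt daemon from a desktop machine
  virt-manager -c qemu+ssh://root@kvmhost/system

Whether that helps on the illumos side is another question, since it needs libvirtd running on the target.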
 

Patrick

Administrator
Staff member
Dec 21, 2010
cactus said:
HaHa, I have been looking into this all weekend, spurred by Patrick's thread about the site and a backup server I want to build. The biggest hurdle is a frontend; sure, the CLI probably isn't too hard to learn, but I like simple.

AQEMU does not do remote servers.
Maybe virt-manager
Interesting. One crazy idea was trying to do an HA Solaris variant on two of the nodes. At some point, simplicity will take over. It's really too bad someone hasn't done a nice UI with a storage layer that does Lustre or something for the whole hypervisor group. I think that's kinda what OnApp's paid offering does.
 

nitrobass24

Moderator
Dec 26, 2010
Yeah, Lustre would be sweet. Too bad the 2.0 release only works on RHEL and OEL, both of which you can't get for free.
 

nitrobass24

Moderator
Dec 26, 2010
Looks to be supported on CentOS (a white label of RHEL), which is free.
I didn't see that on the Lustre website last time I looked. Unless Lustre is baked into the Linux kernel? They specifically state that Fedora, the FOSS derivative of RHEL, is client-only.
 

cactus

Moderator
Jan 25, 2011
CentOS aims to be binary-compatible with RHEL. The way I see it, Fedora has little to do with RHEL except RPM and Red Hat sponsorship.
 

Thatguy

New Member
Dec 30, 2012
For my home setup, I was virtualizing a couple of Windows boxes (AD at home, for whatever reason, and OWA for personal stuff), uTorrent under Win32, and Air Video Server for serving video to my iDevices.

The box was: dual Opteron 4170 HE, 48GB RAM, running RHEL 6.x, then CentOS when my subscription expired.
The box became: dual E5-2620, 64GB RAM; kept CentOS for a while, then went Ubuntu & ZFSonLinux, now ESXi 5.1 & OmniOS.

I left KVM for several reasons. The VirtIO drivers (enhanced-performance drivers for network/disk access) under Windows would randomly die without warning, at least for networking, and this was with the RHEL-endorsed/supplied drivers. I also found performance under KVM to be quite terrible: my Windows machines at idle would eat up one of my Opteron cores, and that didn't get much better when I migrated to the latest-generation E5. And I found simple things like attaching a CD-ROM drive, adding disks, etc. annoying in virt-manager.

I did like libvirtd with KVM, and virt-manager was nice. Plus, the ability to set a VM's console to an arbitrary port you can connect to via VNC from any host is very, very nice (see the sketch below). @OP I also have a MacBook, and find it annoying to boot Parallels just to launch vSphere when I want a console.
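
With libvirt that's just the graphics element in the domain XML; a sketch, with example port and listen values and an invented VM name:

  # set the VM's VNC console to a fixed port in the domain XML (via: virsh edit winguest)
  #   <graphics type='vnc' port='5901' autoport='no' listen='0.0.0.0'/>
  # then from any machine:
  vncviewer somehost:5901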

I don't have any quantifiable data, but I feel that ESXi is faster and more responsive than qemu-kvm. I also have some formal training with ESXi, so I personally feel more comfortable in its UI and can get the things I want done very quickly. Also, the internal 10Gb stuff for platforms that support it (most now) is fun, plus there's very fine granularity for resource control.

PCI passthrough is mostly fixed in 5.1. They released a patch in November-ish that corrected the pink screen of death for the most part. It is unlikely VMware will drop the free version, as it does nothing but make them money, either through new sales or through small corps who deploy it, have no idea what they're doing, and then buy support contracts.

I run 5.1 and pass through a 9265-8i and a 9211-8i to one VM, and a 90??-8e to another VM, without incident (so far).
 

cactus

Moderator
Jan 25, 2011
Cloudmin GPL is out. Also, OmniOS has no libvirt support, so most of the KVM tools for Linux look to be out.
 

Mike

Member
May 29, 2012
Your problems with the Windows VMs eating up CPU could be related to the virtio drivers OR the stupid virtual tablet device added by most virt-manager deployments. It eats up CPU like crazy, and I haven't really looked into why. Just delete it and watch the load drop (see the sketch below). Also very nice with KVM is the SPICE support from the host, instead of ancient VNC. Memory overcommit is possible with KVM too, and memory management beyond that is quite similar to ESXi, without the 32GB free limit of course. Enable KSM with a somewhat new kernel for that. Also, with the last few kernel releases you can get access to VFIO on the IOMMU side. I haven't tried it myself, as I don't think libvirt has support for it (yet?).
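
If anyone wants to try that, the tablet shows up as an input device in the libvirt domain XML. A sketch, with an invented VM name:

  # find the offending device
  virsh dumpxml winguest | grep tablet
  #   <input type='tablet' bus='usb'/>

  # open the XML in $EDITOR and delete that line
  virsh edit winguest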

Has anybody tried openvswitch yet?
 

Jeggs101

Well-Known Member
Dec 29, 2010
I didn't know that OpenSolaris derivatives don't work with KVM. Great to know.
 

mixer

Member
Nov 26, 2011
Joyent ported KVM to their SmartOS in early 2011; it now also works on OmniOS and OpenIndiana.
 

soundscribe

Member
Mar 26, 2012
I'm also considering removing ESXi from the mix. I've been running a napp-it/OI VM host (using VirtualBox) for about a year now and it's worked very well. I'm not sure I want to add another layer of complexity to the overall architecture.

VirtualBox on OI has proven to be very stable. VMs stay up until I shut them down - both Windows and Linux - and I never need to touch the OI host itself. It's been up 165 days now. My Linux Asterisk server running in a VM has been up for 2 months, etc. GUI VM performance is a bit slow, but that's more a function of old hardware than the VM software. When I run VirtualBox on my i5 desktop it's quite snappy.

So, given that, are there compelling reasons to use ESXi? I suppose performance is somewhat better, but my needs are modest. If I want to clone a VM or move one around, it's trivial.
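
For what it's worth, all of that is scriptable with VBoxManage as well (VM names here are invented):

  # run a VM with no GUI attached
  VBoxManage startvm "asterisk" --type headless

  # clone it for testing
  VBoxManage clonevm "asterisk" --name "asterisk-test" --register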

I don't mean to hijack the thread, just chiming in with my thoughts on the same topic.