Xenserver now fully open


nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
The open-source XenServer from xenserver.org is not the same thing as XCP from xen.org.

The open-source XenServer from xenserver.org is the exact same XenServer you download from Citrix.com.
 

dswartz

Active Member
Jul 14, 2011
610
79
28
Correct, it isn't. I was speaking sloppily. My point was: whether I used the official XenServer (commercial) or XCP (free), in neither case was it possible (that I know of) to have a 64-bit Dom0. That mattered to me, since I wanted to run ZFS without needing a virtual appliance.
 

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
Correct, it isn't. I was speaking sloppily. My point was: whether I used the official xenserver (commercial) or xcp (free), in neither case was it possible (that I know of) to have a 64-bit dom0. It mattered to me, since I wanted to have ZFS without the need for a virtual appliance.
This is my use case as well, so Xen is not an option for me.
 

nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
So I was thinking about this some more. The reason we run ZFS as a VM inside ESXi and pass the controller through is that we can't install ZFS on ESXi itself.

With XenServer there should be no need to run ZFS in a VM, because Xen is just a set of packages running alongside a standard Linux kernel; for that matter, there is no need to run it as a guest under Dom0.
Why not just install ZoL on CentOS, set it up with your controller, create your NFS/iSCSI shares/LUNs, then install XenServer, connect with XenCenter, and add an NFS/iSCSI SR pointing at the host's own IP?
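A rough sketch of those steps — the pool name, disk devices, and IP below are all hypothetical, the ZoL repository setup is omitted, and it glosses over whether XenServer can actually be layered onto an existing CentOS install (discussed later in the thread):

```shell
# Hypothetical names throughout: pool "tank", dataset "tank/vmstore", host IP 192.168.1.10.
# Install ZFS on Linux from the ZoL repo (repo/package names vary by release):
yum install -y zfs
modprobe zfs

# Build a pool on the controller's disks and export a dataset over NFS:
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore

# Then, via the xe CLI (or XenCenter), add an NFS SR pointing back at the host's own IP:
xe sr-create type=nfs name-label=zfs-nfs \
  device-config:server=192.168.1.10 device-config:serverpath=/tank/vmstore
```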
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
VMware is not designed to make a single VM go fast. The 10GbE drivers show that clear as day: good luck getting one VM (without SR-IOV) to run at full speed. ESXi 5.1 won't even try, since VMware assumes one VM probably cannot use the full bandwidth. You end up with a one- or two-core TX/RX VMDq setup and less than 10GbE of bandwidth, compared to bare metal, which can throw 12 cores at sending and receiving.
 

mmmmmdonuts

Member
Mar 22, 2012
36
0
6
So I was thinking about this some more. The reason we run ZFS as a VM inside ESXi and pass the controller through is that we can't install ZFS on ESXi itself.

With XenServer there should be no need to run ZFS in a VM, because Xen is just a set of packages running alongside a standard Linux kernel; for that matter, there is no need to run it as a guest under Dom0.
Why not just install ZoL on CentOS, set it up with your controller, create your NFS/iSCSI shares/LUNs, then install XenServer, connect with XenCenter, and add an NFS/iSCSI SR pointing at the host's own IP?
This was my thought as well. I just don't have enough time at the moment to fool around with making this work. Seeing dswartz have problems getting a 64-bit Dom0 under XCP concerned me a little, even though there should be 64-bit support in XenServer.
 

Mike

Member
May 29, 2012
482
16
18
EU
Since Xen can be managed with libvirt, there are tons of ways to do so. Also, Xen does not run on Linux, although it may seem that way — the hypervisor boots first and Linux runs on top of it as Dom0.
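For example, libvirt's CLI can talk to a Xen host directly — the connection URI depends on the libvirt version and driver in use (older xend-based setups use `xen:///`, newer libxl ones `xen:///system`), and the guest name below is hypothetical:

```shell
# List all domains on the local Xen host, including the control domain:
virsh -c xen:/// list --all

# Inspect the control domain's resources:
virsh -c xen:/// dominfo Domain-0

# Start a guest ("myguest" is a hypothetical domain name):
virsh -c xen:/// start myguest
```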
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
With Xen - in all of its incarnations (Xenserver, xen hypervisor, XCP, XAPI on Debian/Ubuntu, etc):

- The hypervisor layer itself is dead stupid (on purpose). All it does is manage the scheduler and provide a path for "domains" (VMs) to send IO requests to the privileged domain - Dom0 - which handles all IO.

- You can't run any filesystem or IO subsystem on the hypervisor itself.

- There is no particular reason why you can't run ZFS or some other advanced filesystem (Btrfs, whatever) on Dom0.

- However, the "packaged" versions of Xen - XenServer and XCP - only support a 32-bit Dom0, and ZFS doesn't work well there because the ARC is too limited. Without a lot of work you can't just slap a 64-bit kernel onto XenServer/XCP (though it may be easier now that XenServer is all open source...it would still be a PITA).

- nitrobass24's idea - "Why not just install ZoL on CentOS, set it up using your controller and create your NFS/iSCSI shares/luns; then just install xenserver" - doesn't work exactly this way. You don't "install XenServer" onto an OS. XenServer is an entire package. What you can do is install the key packages that provide the same function as XenServer - basically the XAPI package and a couple of others. This is actually much easier right now on Debian (Wheezy) than CentOS because of some packaging work done in the latest release. It's as easy as "aptitude install xen-xapi", some simple configuration of networking, etc., and a reboot. This installs the Xen hypervisor under your system, and when you reboot your OS is running as Dom0.

- Unfortunately, right now, the xen-xapi package is not really stable. It is still pretty buggy. The most important bug is that it doesn't work right with the most common management interface, XenCenter, and the "open source" managers are largely inactive projects that just don't work. There is also a problem with access to the correct PV drivers for Windows guests - it's a licensing issue with Citrix. But this last part may be solvable now that Citrix has open-sourced the whole thing.

- This last method, what nitrobass24 suggests, is pretty much exactly what both dswartz and I have tried. And abandoned, because the current state of XAPI is not quite ready. I would LOVE for this to work because I think it comes closest to the environment I want to use. Unfortunately, I don't have time to futz with it and make it work.

- So for now I'll stick with Proxmox VE w/ZoL. While it works and is stable (two MAJOR pluses!!!) I just can't stand its "clustering" environment and insistence on using a shared voting database. The shared DB idea is great for true HA environments, but it is a nightmare for simpler uses of clustering - like just being able to move a VM from machine to machine to allow maintenance.
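The Wheezy route described above amounts to very little typing — the package name is taken from the post (Debian's archive may actually call it `xcp-xapi`), and the bridge setup is only sketched:

```shell
# On Debian Wheezy: install the XAPI toolstack, which pulls in the Xen hypervisor.
aptitude install xen-xapi

# Configure bridged networking for guests (details omitted here —
# /etc/network/interfaces needs a bridge such as xenbr0 for XAPI to attach VMs to).

# Reboot: GRUB now boots the Xen hypervisor first, and the existing
# OS comes back up running as Dom0 on top of it.
reboot
```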
 

w0mbl3

Member
Aug 13, 2013
36
19
8
Sydney, Oz
With Xen - in all of its incarnations (Xenserver, xen hypervisor, XCP, XAPI on Debian/Ubuntu, etc):

- There is no particular reason why you can't run ZFS or some other advanced filesystem (BTFS, whatever) on Dom0.

- However, the "packaged" versions of Xen - XenServer and XCP - only support a 32-bit Dom0, and ZFS doesn't work well there because the ARC is too limited. Without a lot of work you can't just slap a 64-bit kernel onto XenServer/XCP (though it may be easier now that XenServer is all open source...it would still be a PITA).

- nitrobass24's idea - "Why not just install ZoL on CentOS, set it up using your controller and create your NFS/iSCSI shares/luns; then just install xenserver" - doesn't work exactly this way. You don't "install XenServer" onto an OS. XenServer is an entire package. What you can do is install the key packages that provide the same function as XenServer - basically the XAPI package and a couple of others. This is actually much easier right now on Debian (Wheezy) than CentOS because of some packaging work done in the latest release. It's as easy as "aptitude install xen-xapi", some simple configuration of networking, etc., and a reboot. This installs the Xen hypervisor under your system, and when you reboot your OS is running as Dom0.
The latest (now fully open-source) XenServer 6.2 has a 64-bit hypervisor - I was actually quite impressed with the brief play I had with it last night, along with the XenCenter Windows-based admin tool, although my current test box doesn't have VT-d (I'm waiting on a Xeon CPU for it), so I couldn't test all the bits I wanted to yet.

I've been running Xen for about 6 years on various home hardware, on flavours of CentOS; for the last 2 years on Scientific Linux with a self-compiled custom kernel and libvirt setup, to sort out issues with paravirt PCI passthrough so I could create a virtualised DMZ/gateway for the home network (on hardware that doesn't support VT-d/IOMMU).

Between the new CentOS 6 support for Xen (part of CentOS extras) and the fully open-source XenServer, I finally have a couple of what appear to be headache-free-maintenance alternatives, and have just built a test server to start planning the next iteration of my virtualised home network. I might, for the first time in 6 years, be able to use an off-the-shelf Xen hypervisor!

My initial impression is that XenServer will make creating new development/test VMs easier, but it has worse power management than the latest CentOS 6.4 with built-in Xen Dom0 support (around a 20W difference in power consumption on the same hardware - presumably worse idle-state and ACPI support in the kernel XenServer uses for Dom0, although that's yet to be confirmed).

Adding ZFS to my file-server VM (and PCI-passing a Dell Perc 310 to it), or running a ZFS appliance VM (nas4free etc.), is on my list of things to do this time around, to replace my old (current) fileserver VM's RAID1 setup.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Having a 64-bit hypervisor is not the issue. Xen (both the commercial XenServer and XAPI) has had a 64-bit hypervisor for a long time.

The issue is the lack of support for a 64-bit Dom0. Since Dom0 is responsible for all IO, you need a 64-bit Dom0 for certain newer device drivers and - for the purposes of building an AIO system - for running a memory-hungry filesystem like ZFS to share with the entire system.

Neither XAPI nor XenServer 6.2 supports a 64-bit Dom0 yet. And running something like 64-bit CentOS or Wheezy + XAPI is bug-ridden and unstable.
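A quick way to see what you actually have in a given Dom0 — the ZFS parameter path assumes a ZFS on Linux install:

```shell
# i686 on the 32-bit XenServer/XCP Dom0, x86_64 on a 64-bit one:
uname -m

# How much RAM the Dom0 has been allocated:
free -m

# With the ZoL module loaded, the ARC ceiling is visible here
# (0 means the default, roughly half of Dom0's RAM):
cat /sys/module/zfs/parameters/zfs_arc_max
```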
 

Mike

Member
May 29, 2012
482
16
18
EU
Running storage on the virtualisation node is already a niche market, let alone ZFS with its huge caches and processing requirements. You don't have enough of either on that node, so everybody will agree to separate them. Vmwarez will never be optimised for the home market (on steroids), and neither will the commercial Xen variants.
Running Xen or KVM on your own will offer you the same feature set, just without the fancy interface and names. Now if you were to use Fedora 20 or sid/jessie I would say this would be an adventure, but with these outdated distros it won't get much more stable, no?

Just give it a go - latest kernel and hypervisors - and see how unstable it really is :D
 

w0mbl3

Member
Aug 13, 2013
36
19
8
Sydney, Oz
Having a 64-bit hypervisor is not the issue. Xen (both the commercial XenServer and XAPI) has had a 64-bit hypervisor for a long time.

The issue is the lack of support for a 64-bit Dom0. Since Dom0 is responsible for all IO, you need a 64-bit Dom0 for certain newer device drivers and - for the purposes of building an AIO system - for running a memory-hungry filesystem like ZFS to share with the entire system.
Wow, I'm stunned - with only a couple of hours of XenServer experience under my belt, I hadn't dug into it too much yet.

I've been running 64-bit Dom0s for years and just assumed that the enterprise repackagings of Xen would support the same. I haven't yet checked whether the kernel is built with the pciback module for paravirt passthrough (i.e. not using VT-d/IOMMU), but a 32-bit Dom0 makes XenServer unfit for my needs.

Looks like I'm not going to get to run a straightforward install of someone else's repackage after all... it's probably me, but I always seem to be on the (b)leading edge.

Stability with Xen has only ever been an issue for me when setting up the hardware for an initial build - once each server has run through its teething troubles finding passthrough hardware with good drivers, my Dom0 regularly hits > 6 months of uptime before I reboot for Dom0 software changes. Hopefully now that CentOS 6 officially supports Xen, stability will improve.

I do wonder whether the open-sourcing of XenServer and the official support by CentOS are subtle indicators that Red Hat is going to bring those into RHEL7.

Thanks for the clarifications!
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
I think VSAN (ESXi 5.5) and Windows 2012 R2 will beg to differ on SDS. Like networking, it has been commoditised into cheap functional bits.

Many folks run a VSA (LeftHand) for core storage. It can be very reliable, and with both Windows and ESXi fighting for hypervisor space, we'll probably see more and more of it FREE :) just wait!
 

w0mbl3

Member
Aug 13, 2013
36
19
8
Sydney, Oz
Running Xen or KVM on your own will offer you the same feature set, just without the fancy interface and names. Now if you were to use Fedora 20 or sid/jessie I would say this would be an adventure, but with these outdated distros it won't get much more stable, no?

Just give it a go - latest kernel and hypervisors - and see how unstable it really is :D
I tried an F17 Dom0 briefly as a test - but it was pretty broken. It may have improved under F19; however, now that CentOS officially supports Xen, I'm going to stick with RHEL6.

I installed and set up a Dom0 on the latest CentOS yesterday - it takes 4 commands after the minimal install:

yum install centos-release-xen
yum install xen
/usr/bin/grub-bootxen.sh
reboot

It's on a test box at present - for reference, it's running Xen 4.2.2 and kernel 3.4.59-8.el6.centos.alt.x86_64. Stability unknown as yet :)
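After the reboot, a couple of quick sanity checks confirm the box really came up under Xen (version numbers here are the ones from this particular test box):

```shell
# The hypervisor version as seen from Dom0 (4.2.2 on this setup):
xl info | grep xen_version

# Domain-0 should be the only domain running on a fresh install:
xl list

# The Dom0 kernel (3.4.x el6.centos.alt on this setup):
uname -r
```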
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Running storage on the virtualisation node is already a niche market, let alone zfs with its huge caches and processing requirements. Neither of those you have enough of on that node, so everybody will agree to seperate them. Vmwarez will never be optimised for the home market (on steroids), neither will the commercial xen variants.
Running xen or kvm on your own will offer you the same featureset, just without the fancy interface and names. Now if you were to use fedora 20 or sid/jessie i would say this would be an adventure, but with these outdated distros it wont get much more stable, no?

Just give it a go, latest kernel and hypervisors and just see how unstable it really is :D
I did "give it a go". I know for sure that Wheezy + XAPI is buggy as heck. I didn't really try the CentOS version, but since it is built from the same source project I wouldn't expect different results.
 

Mike

Member
May 29, 2012
482
16
18
EU
Wow, I'm stunned - with only a couple of hours of XenServer experience under my belt, I hadn't dug into it too much yet.

I've been running 64-bit dom0's for years and just assumed that the enterprise repackages of xen would support same. I haven't tried to see whether the kernel is built with the pciback module yet either for paravirt passthru (i.e. not using VT-d/IOMMU), but a 32bit dom0 makes XenServer unfit for my needs.

Looks like I'm not going to get to run a straightforward install of someone else's repackage after all.. its probably me, but I always seem to be on the (b)leading edge.

Stability with Xen has only ever been an issue for me when setting up the hardware for an initial build - once each server has run through its teething troubles finding hardware for passthru with good drivers, my dom0 regularly hits > 6 months before I reboot for dom0 software/etc. changes. Hopefully now that Centos 6 officially supports xen, stability will improve.

I do wonder whether the open-sourcing of XenServer and the official support by Centos are subtle indicators that Redhat is going to bring those into RHEL7.

Thanks for the clarifications!
RHEL and Xen? :rolleyes: Really?
They have their enterprise virtualisation with KVM, which looks pretty cool if you ask me.
 

w0mbl3

Member
Aug 13, 2013
36
19
8
Sydney, Oz
RHEL and Xen? :rolleyes: Really?
They have their enterprise virtualisation with KVM, which looks pretty cool if you ask me.
Not actual RHEL - Scientific Linux (a distro built from RHEL source, similar to CentOS).

I had to build a bunch of bits myself (Xen, a Dom0-capable 64-bit kernel with pciback modules, libvirt with Xen support added rather than just KVM, etc.), but that's not exactly hard, and given how infrequently I replace my Dom0 hardware/software, it wasn't a huge cost.

Of course, this is for a home server - not an enterprise setup - so it only runs about 6 VMs (firewall with a dedicated pass-through NIC, transparent proxy/DansGuardian, mail server, file server, intranet/database host, web development VM, etc.).

KVM didn't do what I needed at the time (it won't pass through PCI devices without VT-d/IOMMU support, which the HP MicroServer doesn't have), so it wasn't an option. Besides, I've been using Xen since before I owned a CPU with hardware virtualisation support (i.e. using software virtualisation only) - so the scripts/knowledge were already here :)
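For reference, the paravirt passthrough described here boils down to hiding the device from Dom0's native driver and handing it to pciback — the PCI address below is hypothetical, and the exact sysfs paths depend on pciback being built into the Dom0 kernel:

```shell
# Unbind the device (e.g. a NIC at 0000:02:00.0) from its native driver:
echo 0000:02:00.0 > /sys/bus/pci/devices/0000:02:00.0/driver/unbind

# Register the slot with pciback and bind the device to it:
echo 0000:02:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:02:00.0 > /sys/bus/pci/drivers/pciback/bind

# Then, in the PV guest's config file, assign the device:
# pci = ['0000:02:00.0']
```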
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
How is Xen ranked for supporting Microsoft guests? Have you tried shared GPU support, aka SR-IOV for video (API-intercept or VGX mode)?

Microsoft basically turns your GPU into a transcoder; think QuickSync for RDP.

ESXi basically runs X and does API-intercept to share one video card amongst many users - a somewhat emulated form of SR-IOV for the video card.

What does Xen offer as far as GPU acceleration at more than a 1:1 ratio?