
Re: [Xen-API] running latest dev versions of xenopsd on Debian/Ubuntu


  • To: xen-api@xxxxxxxxxxxxx
  • From: Lars Kurth <lars.kurth@xxxxxxx>
  • Date: Thu, 17 Jan 2013 14:17:00 +0000
  • Delivery-date: Thu, 17 Jan 2013 14:17:45 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>

Scott,

thank you for this. I am going to add categories to the page so that it can be found more easily.

Please make sure you do this in future: uncategorized pages are really hard to find on the wiki. More info at:
- http://wiki.xen.org/wiki/Categories_for_Authors
- http://wiki.xen.org/wiki/Category:Templates (also useful)

Regards,
Lars

On 11/01/2013 18:58, James Bulpin wrote:
Feeling brave, I've tried this on CentOS 6.3 with Xen 4.2. Here's
how it went...

I used the "Software development workstation" installation base for
CentOS 6.3 (x86_64), which meant I got a certain amount of virtualisation
infrastructure including a "virbr0" NATed bridge. I prefer basic L2
bridging, so I manually created a xenbr0 bridge in the traditional
manner. I installed Xen 4.2.1 and a suitable 3.4 kernel.
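
For reference, on a Red Hat-style distro the "traditional manner" amounts
to a pair of ifcfg files along these lines (the NIC name eth0 and DHCP
are assumptions - adjust for your hardware and addressing, and with
NetworkManager you may also need NM_CONTROLLED=no in both files):

    # /etc/sysconfig/network-scripts/ifcfg-xenbr0
    DEVICE=xenbr0
    TYPE=Bridge
    BOOTPROTO=dhcp
    ONBOOT=yes
    DELAY=0

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    BRIDGE=xenbr0
    ONBOOT=yes

followed by a restart of the network service.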

I chose to compile vncterm from the source at
https://github.com/xen-org/vncterm - this uses the old xs.h filename for
the xenstore headers, so I had to change it to xenstore.h to make it
build against 4.2.
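
The change itself is tiny - wherever the vncterm source includes the old
header, point it at the renamed one, roughly:

    -#include <xs.h>
    +#include <xenstore.h>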

I used the latest binary download of opam to install OCaml 4.00.1. Being
a local installation, this meant the paths differed from Dave's tutorial:
~/.opam/4.00.1/ instead of ~/.opam/system/.
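
For anyone repeating this, the opam side was roughly the following (exact
commands may differ slightly depending on the opam version you grab):

    opam init
    opam switch 4.00.1
    eval `opam config env`

after which the compiler and packages live under ~/.opam/4.00.1/ as noted
above.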

I hit an error building xenopsd because it was depending on xenctrl 4.1.0
rather than just the latest version - Dave fixed this in his repo.

To run xenopsd I had to uncomment the vncterm config line in
xenopsd.conf and point it at the binary. In the same file I had to
uncomment the hvmloader line and remove the quotes (xenopsd was treating
them literally).
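
For reference, the two lines ended up looking something like this - keep
whatever key spelling your generated xenopsd.conf uses, and treat the
paths as examples that depend on where vncterm and hvmloader were
installed:

    vncterm = /usr/local/bin/vncterm
    hvmloader = /usr/lib/xen/boot/hvmloader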

For the VM storage I chose to use LVM to fit with the current "phy:"
constraint.
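
Concretely that just means one logical volume per VM disk, referenced via
a phy: path - for example (the volume group name and size are whatever
suits your setup):

    lvcreate -L 8G -n vm1-disk VolGroup
    # the disk is then referenced as phy:/dev/VolGroup/vm1-disk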

Trying to start a PV guest hit a few problems:

  1. xenopsd gave an error saying it had no bootable devices - this was
     because the devices list was hard-wired to empty in the CLI; Dave
     fixed this in the repo.

  2. xenopsd assumes that pygrub accepts the --default_args, --extra_args
     and --vm arguments, which are present in the XenServer/XCP version
     of pygrub but not in the Xen 4.2 version. I patched the latter to
     add them (sketched after this list).

  3. xenopsd complains of an invalid result from the domain builder. Dave
     is currently working on this.
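
To make item 2 concrete: the patch only has to make the Xen 4.2 pygrub
accept (and, at minimum, ignore) the extra options xenopsd passes. The
following is a minimal, self-contained Python sketch of that idea - the
short/long option lists are placeholders, not pygrub's real ones:

    #!/usr/bin/env python
    # Sketch only: accept --vm, --default_args and --extra_args alongside
    # some placeholder existing options, and simply ignore the new ones.
    import getopt
    import sys

    def parse_args(argv):
        opts, args = getopt.gnu_getopt(
            argv, "qh",
            ["quiet", "help",                         # placeholder existing options
             "vm=", "default_args=", "extra_args="])  # options xenopsd passes
        ignored = {}
        for opt, val in opts:
            if opt in ("--vm", "--default_args", "--extra_args"):
                ignored[opt] = val                    # accepted but not acted upon
        return ignored, args

    if __name__ == "__main__":
        extra, rest = parse_args(sys.argv[1:])
        print("accepted and ignored: %s" % extra)
        print("remaining arguments: %s" % rest)

Running it as "./sketch.py --vm test --extra_args console=hvc0 /dev/vg/disk"
prints the ignored options and leaves the disk path in the remaining
arguments.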

Trying an HVM guest led to a different set of challenges:

  1. xenopsd was hanging waiting to plug the VBD - this turned out to be
     because the hotplug scripts were not running, as xl had disabled
     them (by default xl runs these scripts itself, but this can be
     overridden by setting "run_hotplug_scripts=0" in /etc/xen/xl.conf). I
     suspect this problem will go away when Rob does the xenopsd libxl
     port.

  2. xenopsd uses a VIF hotplug script that is very XenServer/XCP
     centric and doesn't run to completion on CentOS 6.3. This was
     configured in the automatically generated xenopsd.conf, so I
     changed it to use /etc/xen/scripts/vif-bridge instead.

  3. xenopsd doesn't put the "bridge" key in the usual backend xenstore
     location (/local/domain/0/backend/vif/<domid>/<device>) so the Xen
     4.2 vif script defaulted to the first bridge, not the one I had
     configured.

  4. xenopsd was hanging waiting for a hotplug event on the VIF but was
     watching /xapi/<domid>/vif/<device>/hotplug, which would never be
     touched by the Xen 4.2 hotplug scripts. To work around this I
     manually created that entry after each VM start.

  5. qemu-dm-wrapper would always silently fail when run by xenopsd but
     not when run interactively. This is because it tried to setrlimit a
     value with the soft limit being higher than the hard limit. I put in
     a simple fix to avoid this.

  6. qemu-dm-wrapper would always fail because it hard-coded the path to
     xenstore-write under /usr/sbin but it's in /usr/bin on my system. I
     fixed this locally, but searching $PATH would be more robust in the
     future (this and the setrlimit fix above are sketched after this
     list).
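
To make items 5 and 6 concrete, here is a small Python sketch of the two
local fixes: clamp the requested soft limit to the current hard limit
before calling setrlimit, and look xenstore-write up on $PATH instead of
hard-coding /usr/sbin. It illustrates the approach only - the real
qemu-dm-wrapper code differs, and RLIMIT_CORE below is just an example
resource:

    #!/usr/bin/env python
    # Sketch only: not the real qemu-dm-wrapper, just the shape of the fixes.
    import os
    import resource

    def set_soft_limit_clamped(which, wanted_soft):
        # Item 5: never request a soft limit above the current hard limit,
        # otherwise setrlimit() fails and the wrapper dies silently.
        _soft, hard = resource.getrlimit(which)
        if hard != resource.RLIM_INFINITY:
            if wanted_soft == resource.RLIM_INFINITY or wanted_soft > hard:
                wanted_soft = hard
        resource.setrlimit(which, (wanted_soft, hard))

    def find_on_path(name):
        # Item 6: locate a tool via $PATH instead of assuming /usr/sbin
        # (xenstore-write lives in /usr/bin on my system).
        for d in os.environ.get("PATH", "").split(os.pathsep):
            candidate = os.path.join(d, name)
            if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
                return candidate
        raise OSError("%s not found on $PATH" % name)

    if __name__ == "__main__":
        # RLIMIT_CORE is only an example; the limit the wrapper sets may differ.
        set_soft_limit_clamped(resource.RLIMIT_CORE, resource.RLIM_INFINITY)
        print("xenstore-write found at: %s" % find_on_path("xenstore-write"))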

So in summary, with a few tweaks and workarounds, I can now run HVM
guests on CentOS 6.3 using xenopsd. I encourage others to try this on
their distro of choice.

Bugs have been filed at https://github.com/xen-org/xenopsd/issues

Cheers,
James

-----Original Message-----
From: xen-api-bounces@xxxxxxxxxxxxx [mailto:xen-api-
bounces@xxxxxxxxxxxxx] On Behalf Of Dave Scott
Sent: 04 January 2013 15:44
To: xen-api@xxxxxxxxxxxxx
Subject: [Xen-API] running latest dev versions of xenopsd on
Debian/Ubuntu

Hi,

I've written a wiki page describing how to build the latest development
version of "xenopsd" (and its dependencies) from source:

http://wiki.xen.org/wiki/Building_Xenopsd

"xenopsd" is the name of the domain manager of the XCP toolstack -- it
is responsible for starting, stopping, migrating VMs.

Being able to build the development version is really useful if you
want to:
* check out a new feature
* reproduce a bug
* test a fix

Let me know if you try this and have any problems or suggestions!

There are a couple more components of the XCP toolstack that need some
fixups so they can build easily, in particular:
* squeezed: manages memory ballooning
* networkd: configures VM networking
* rrdd: collects and archives performance statistics
* xapi: manages the overall resource pool

Hopefully we can work on these one-by-one until they are all as easy to
build as xenopsd.

Cheers,
Dave

_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api