
Re: [Xen-API] [Xen-devel] GSoC 2012 project brainstorming



On Thu, 2012-03-15 at 10:02 +0000, Dave Scott wrote:
> Ian Campbell wrote:
> > On Wed, 2012-03-14 at 17:55 +0000, Dave Scott wrote:
> > Do you handle import as well as export? One of the more interesting use
> > cases (I think) is handling folks who want to migrate from an xm/xl based
> > setup to a xapi setup (e.g. by installing Kronos on their existing Debian
> > system). That was the primary aim of the proposed project.
> 
> IIRC it can import simple things, but it's quite incomplete. If the
> goal is to migrate from an xm/xl setup to xapi then it probably makes
> more sense to use the existing (and by definition correct) xl config
> parser and then talk the XenAPI directly.

That was my line of thinking.
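
(Purely to illustrate that route: below is a rough Python sketch which assumes
the standard XenAPI Python bindings and, rather than the xl C parser, simply
exec's an xm-style config as Python "key = value" assignments for brevity.
The host, credentials and field mapping are placeholders, and a real
VM.create needs the full constructor record, so treat it as a starting point
rather than a working tool.)

    # Rough sketch: read an xm-style config and hand the fields to xapi
    # over the XenAPI. Host URL and credentials are placeholders.
    import XenAPI

    def parse_xm_config(path):
        # xm configs are Python-style "key = value" assignments, so
        # execute them in an empty namespace and keep the results.
        ns = {}
        with open(path) as f:
            exec(f.read(), {}, ns)
        return ns

    def xm_to_vm_record(cfg):
        # Map a few obvious fields; the real VM.create record needs many
        # more (actions_after_*, platform, HVM_boot_*, and so on).
        mem = str(int(cfg.get("memory", 256)) * 1024 * 1024)
        vcpus = str(cfg.get("vcpus", 1))
        return {
            "name_label": cfg.get("name", "imported-vm"),
            "memory_static_max": mem,
            "memory_dynamic_max": mem,
            "memory_dynamic_min": mem,
            "memory_static_min": mem,
            "VCPUs_max": vcpus,
            "VCPUs_at_startup": vcpus,
        }

    def import_vm(cfg_path, url="http://localhost", user="root", password=""):
        record = xm_to_vm_record(parse_xm_config(cfg_path))
        session = XenAPI.Session(url)
        session.login_with_password(user, password)
        try:
            # With a complete record this would become:
            #   return session.xenapi.VM.create(record)
            return record
        finally:
            session.xenapi.session.logout()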

>  Or emit a xapi "metadata export".

Hadn't considered this one -- how well specified is that format?

Another thing I'd wondered about was the ability to consume/exhume an OVA
(or is it OVF?) image.

> One of the interesting areas will be storage...
> 
> > > [root@st20 ~]# cat win7.xm
> > > name='win7'
> > > builder='hvmloader'
> > > boot='dc'
> > > vcpus=1
> > > memory=2048
> > > disk=[ 'sm:7af570d8-f8c5-4103-ac1d-969fe28bfc11,hda,w', 'sm:137c8a61-113c-ab46-20fa-5c0574eaff77,hdb:cdrom,r' ]
> > 
> > Half-assed wondering -- I wonder if sm: (or script=sm or similar)
> > support could work in xl...
> 
> Yeah I've been wondering that too. As well as tidying up the domain
> handling code in xapi I've also been trying to generate docs for the
> xapi <-> storage interface (aka "SMAPI"). The current version is here:
> 
> http://dave.recoil.org/xen/storage.html
> 
> I'd also like to make the current storage plugins run standalone (currently
> they require xapi to be running). If we did that then we could potentially
> add support for XCP storage types directly into libxl (or the hotplug 
> scripts).
> 
> As well as generating docs for the SMAPI I can also generate Python skeleton
> code, to make it easier to write storage plugins. A custom one of these
> might make it easier to migrate from xm/xl to xapi too, by leaving the
> disks where they are, rather than moving them into a regular SR.

That could be a good plan. Given such an SR plugin could you then do
some sort of "xe vdi-move" to move a VDI from that plugin to one of the
"standard" ones?

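(To make that concrete, here is a hypothetical Python sketch of such a
"leave the disks where they are" plugin. Class and method names are invented
for illustration only -- the real entry points and return values are defined
by the SMAPI skeletons Dave describes above, not by this code.)

    # Hypothetical sketch: an SR that exposes existing disk images in a
    # directory as VDIs, without copying or converting anything.
    # Method names and return shapes are illustrative, not the real SMAPI.
    import os

    class InPlaceSR(object):
        """Pretend SR backed by a directory of pre-existing disk images."""

        def __init__(self, path):
            self.path = path

        def scan(self):
            # Report every image file already in the directory as a VDI.
            vdis = []
            for name in os.listdir(self.path):
                if name.endswith((".img", ".qcow2", ".vhd")):
                    full = os.path.join(self.path, name)
                    vdis.append({"vdi": name,
                                 "virtual_size": os.path.getsize(full)})
            return vdis

        def attach(self, vdi):
            # Hand the existing path straight back to the toolstack; a
            # real plugin would return whatever the SMAPI expects here.
            return {"params": os.path.join(self.path, vdi)}

    if __name__ == "__main__":
        sr = InPlaceSR("/var/lib/xen/images")   # placeholder path
        for vdi in sr.scan():
            print(vdi)
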
> 
> > 
> > > vif=[  ]
> > > pci=[  ]
> > > pci_msitranslate=1
> > > pci_power_mgmt=0
> > > # transient=true
> > >
> > > Another goal of the refactoring is to allow xapi to co-exist with domains
> > > created by someone else (e.g. xl/libxl). This should allow a migration to
> > > be done piecemeal, one VM at a time on the same host.
> > 
> > The brainstorming list below includes "make xapi use libxl". Is this (or
> > a subset of this) the sort of thing which could be done by a GSoC
> > student?
> 
> I think a subset could probably be done in the GSoC timeframe. Before the
> summer starts I should have merged my refactoring into the xapi mainline.
> A student could then fire up the ocaml libxl bindings (they might need a bit
> of tweaking here or there) and then start patching things through. All the
> critical libxc code is now called (indirectly) via a single module:
> 
> https://github.com/djs55/xen-api/blob/cooper/ocaml/xenops/xenops_server_xen.ml

That's a surprisingly (in a good way) small amount of code!

> If that were made to use libxl then the job would be basically done. All that
> would be left would be little things like statistics gathering.
> 
> 
> > 
> > I suppose it is only fair that I offer to be co-/backup-mentor to a main
> > mentor from the xapi side of things for such a project...
> 
> Great :-)

Was that the sound of you offering to be (or to find) a main mentor? ;-)

Ian.

