
Re: [Xen-devel] [PATCH] xenctld - a control channel multiplexing daemon

  • To: Anthony Liguori <aliguori@xxxxxxxxxx>
  • From: Andrew Warfield <andrew.warfield@xxxxxxxxx>
  • Date: Fri, 21 Jan 2005 16:39:46 +0000
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 21 Jan 2005 16:41:23 +0000
  • List-id: List for Xen developers <xen-devel.lists.sourceforge.net>

> 1) Change xcs to use unix domain sockets.

The original thought behind using IP sockets in xcs was planning for
cluster management.  Obviously there are other ways to map this in;
I'm not terribly fussy on this point.

> 2) Add support to xcs to export ptys (storing info in the filesystem
> much the same way xenctld does)

I would much rather see xcs handle only control messages, and see the
console stuff broken off into a separate split driver.  ptys/IP
sockets/whatever can then be mapped in at the backend, independent of
how the control plane works.  I just chatted with Keir about this, and
he agrees that pulling console messages off onto their own rings and
providing a separate backend for them would be a good thing.

> 3) Change xenctld tools to use xcs.

Sure... although I think there is a fair bit more involved in
create/destroy than what those tools currently provide.

> 4) Factor out most of xen interaction in xcs to standard libraries.

Much of the xen interaction in xcs _is_ already in shared libraries
(libxc and libxutil).  The control channels can only be safely bound
by a single consumer in dom0 -- xcs just serves to multiplex this
access.  The interfaces in xcs could probably do with some cleaning
up, as they are reflective of my pulling a lot of the structure out of
the python code in December.  I'm not sure what bits of it you'd like
to see generalized out of xcs, though... can you be more specific?

> I see a three level architecture, the first level being highly portable
> libraries that simplify interacting with Xen.  This would target every
> platform Xen runs on.
>  ...

This is what we have been shooting for with the new control tools: 1.
libxc/xutil, 2. xcs, 3. higher-level tools.

> Thoughts?  I'm willing to code these things up.  Just want to make sure
> it's agreeable first.

Our current plan for the control tools is to fix up a couple of
things: (1) how VM state is represented in dom0, and (2) how easy it
is to add and maintain new tools, drivers, etc.

xcs is a first step, in that it allows new tools to be run alongside xend.

The next step, coming soon, will be a 'persistent store', which will
act as a repository for VM state.  This will hold things like active
domain configurations, device details, etc.

In addition to this, we have been discussing the option of adding
endpoint addressing to the control messages.  Driver setup, for
instance, would move toward a control tool pushing details into the
store before starting the domain.  The frontend driver would then
query the store for the backend details and address the backend
directly.  This should make extending the tools considerably easier.

This is all very high on the to-do list and should start to emerge
over the next while.  It would be great to discuss the design points
in more detail on the list.

Things are a little busy here right now, so I hope this isn't too brief. ;)




