
Re: [Xen-devel] /proc/xen/xenbus supports watch?



On Fri, 2005-09-23 at 10:17 +0100, Keir Fraser wrote:
> However, I'm not clear yet what each separate transport page 
> represents. Is it a single transaction, or a connection that stores 
> multiple watches and one transaction at a time? If the latter, 
> save/restore gets a bit harder as the transport pages must be 
> automatically re-registered and watches re-registered...

I was assuming per-connection, ie. every time a tool
opens /proc/xen/xenbus we map a new page/event channel, and free it on
close.  That is: the drivers in the domU kernel share the store comms
page created with the domain, tools in the domU each get a separate
page, and tools in dom0 each connect to the unix domain socket.  To the
store: one connection, one transport.  (Each connection can set up
multiple watches of course, and Christian has been hinting that he wants
multiple transactions too, but that's another argument.)
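To make the lifecycle concrete, here's a toy model of it in Python.  The
per-open page/event-channel allocation and XS_UNINTRODUCE-on-close are
from the proposal; every other name here (Store, DevConnection, etc.) is
invented purely for illustration:

```python
# Toy model of the proposed per-connection transport: each open of
# /proc/xen/xenbus maps a fresh shared page + event channel, freed on
# close (which tells the store to unmap via XS_UNINTRODUCE).  All names
# except XS_UNINTRODUCE are made up for this sketch.

import itertools

class Store:
    """Stand-in for xenstored: tracks one transport per connection."""
    def __init__(self):
        self.transports = {}                 # conn id -> (page, evtchn)

    def introduce(self, conn_id, page, evtchn):
        self.transports[conn_id] = (page, evtchn)

    def unintroduce(self, conn_id):          # handles XS_UNINTRODUCE
        del self.transports[conn_id]         # store unmaps the page

_next_id = itertools.count(1000)             # numbering as in the diagram below

class DevConnection:
    """One open of /proc/xen/xenbus (the xenbus_dev.c side)."""
    def __init__(self, store):
        self.store = store
        self.conn_id = next(_next_id)
        self.page = "page-%d" % self.conn_id      # pretend-map a fresh page
        self.evtchn = "evtchn-%d" % self.conn_id  # and a fresh event channel
        store.introduce(self.conn_id, self.page, self.evtchn)

    def close(self):
        self.store.unintroduce(self.conn_id)      # store unmaps our page
        self.page = self.evtchn = None            # and we free both
```

The point of the model: two opens give two independent transports, and
closing one doesn't touch the other (or the kernel's own xenbus_xs.c
connection, which isn't in this picture at all).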

So my proposal is (for those who came in late):

[xenstored]
 |
 |--> /var/run/xenstored/socket <-- libxenstore <-- tool in dom0
 |                              <-- libxenstore <-- tool in dom0
 |                              ...
 |--> /var/run/xenstored/socket_ro <-- libxenstore <-- tool in dom0
 |                                 <-- libxenstore <-- tool in dom0
 |                                 ...
 |--> shared page 100/evtchn 100 <-- xenbus/xenbus_xs.c <- domU 1 kernel
 |--> shared page 200/evtchn 200 <-- xenbus/xenbus_xs.c <- domU 2 kernel
 |--> shared page 300/evtchn 300 <-- xenbus/xenbus_xs.c <- domU 3 kernel
 |  ...
 |--> shared page 1000/evtchn 1000 <-- xenbus/xenbus_dev.c <-- libxenstore <-- tool in domU 1
 |--> shared page 1001/evtchn 1001 <-- xenbus/xenbus_dev.c <-- libxenstore <-- tool in domU 1
 |--> shared page 1002/evtchn 1002 <-- xenbus/xenbus_dev.c <-- libxenstore <-- tool in domU 1
 |  ...

This differs from what we have now: xenbus/xenbus_dev.c uses the same
shared page as xenbus/xenbus_xs.c, and there's one big lock
(xenbus_lock), so drivers can't use xenbus while the device is open.  We
all agree that's suboptimal.

So on domain save, we just force-close everyone who has /proc/xen/xenbus
open.  This closing, like a normal close, frees all those pages and
event channels and sends XS_UNINTRODUCE to the store to tell it to
unmap.
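The save-side bookkeeping is tiny.  A sketch, assuming a hypothetical
list of open device connections (only XS_UNINTRODUCE is from the
proposal; the function and field names are invented):

```python
# Sketch of the save path: force-close every open /proc/xen/xenbus fd.
# Each close frees that connection's page and event channel and tells
# the store to unmap via XS_UNINTRODUCE.  All names here are invented.

def force_close_all(open_conns, send_to_store):
    """Close every open device connection before suspending the domain."""
    for conn in list(open_conns):
        conn["page"] = None            # free the shared page
        conn["evtchn"] = None          # free the event channel
        send_to_store("XS_UNINTRODUCE", conn["id"])
        open_conns.remove(conn)
```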

On restore, libxenstore gets EBADF from the /proc/xen/xenbus fd.  It
reopens the fd, re-registers its watches, and continues.  This turns out
to be *exactly* the same as the case where a tool in dom0, using
libxenstore to talk to xenstored via its socket, sees xenstored restart:
we need that code anyway.
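The client-side recovery path would look roughly like this (a Python
sketch: only the EBADF-then-reopen-then-re-register flow is from the
proposal; the class and method names, and the FakeConn stand-in, are
invented):

```python
# Sketch of libxenstore's recovery path: a read fails with EBADF after
# restore (or the dom0 socket drops when xenstored restarts), so we
# reopen the connection, replay our locally recorded watches, and retry.

import errno

class XsHandle:
    def __init__(self, reopen):
        self._reopen = reopen        # callable returning a fresh connection
        self.conn = reopen()
        self.watches = []            # (path, token) pairs we've registered

    def watch(self, path, token):
        self.watches.append((path, token))   # remember, so we can replay
        self.conn.register_watch(path, token)

    def read(self, path):
        try:
            return self.conn.read(path)
        except OSError as e:
            if e.errno != errno.EBADF:
                raise
            # Connection died across save/restore (or xenstored restart):
            # reopen, re-register every watch, then retry once.
            self.conn = self._reopen()
            for p, t in self.watches:
                self.conn.register_watch(p, t)
            return self.conn.read(path)

class FakeConn:
    """Test double for the transport: optionally fails its first read."""
    def __init__(self, fail_first=False):
        self._fail = fail_first
        self.watches = []

    def register_watch(self, path, token):
        self.watches.append((path, token))

    def read(self, path):
        if self._fail:
            self._fail = False
            raise OSError(errno.EBADF, "stale fd after restore")
        return "value"
```

The same XsHandle logic covers both transports, which is the "we need
that code anyway" point above.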

In summary, I think this is the cleanest way to make /proc/xen/xenbus a
first-class citizen and more robust, involves minimal changes, and
doesn't complicate save/restore much at all.

Cheers!
Rusty.
-- 
A bad analogy is like a leaky screwdriver -- Richard Braakman


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
