
Re: [Xen-devel] [PATCH v11 01/17] libxl: fix stubdom console destruction



Stefano Stabellini wrote:
> On Tue, 24 Jul 2012, Roger Pau Monne wrote:
>> Ian Campbell wrote:
>>> On Mon, 2012-07-23 at 18:27 +0100, Roger Pau Monne wrote:
>>>> Stubdoms have several consoles attached, and they don't follow the
>>>> xenstore protocol for devices, since they are always in state 1. We
>>>> have to add an exception to libxl__initiate_device_remove, so libxl
>>>> doesn't wait for them to reach state 6 (Closed).
>>>>
>>>> Report: http://markmail.org/message/yqgppcsdip6tnmh6
>>>>
>>>> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
>>>> Reported-by: Ian Campbell <ian.campbell@xxxxxxxxxxxxx>
>>>> Signed-off-by: Roger Pau Monne <roger.pau@xxxxxxxxxx>
>>>> ---
>>>>  tools/libxl/libxl_device.c |    6 ++++--
>>>>  1 files changed, 4 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
>>>> index a94beab..c4392fa 100644
>>>> --- a/tools/libxl/libxl_device.c
>>>> +++ b/tools/libxl/libxl_device.c
>>>> @@ -592,8 +592,10 @@ void libxl__initiate_device_remove(libxl__egc *egc,
>>>>          LOG(ERROR, "unable to get info for domain %d", domid);
>>>>          goto out;
>>>>      }
>>>> -    if (QEMU_BACKEND(aodev->dev) &&
>>>> -        (info.paused || info.dying || info.shutdown)) {
>>>> +    if ((QEMU_BACKEND(aodev->dev) &&
>>>> +        (info.paused || info.dying || info.shutdown)) ||
>>>> +        (libxl_is_stubdom(CTX, aodev->dev->domid, NULL) &&
>>>> +        (aodev->dev->backend_kind == LIBXL__DEVICE_KIND_CONSOLE))) {
>>> Is this actually specific to stubdom consoles or is that just where
>>> we've noticed it?
>> It's just that I've noticed it with stubdoms, which AFAIK are the only
>> domains that can have more than one console.
> 
> Linux can handle multiple PV consoles

But there's no way to create a domain with multiple consoles from xl, or
at least I haven't been able to find one.

> 
>>> Does it apply to LIBXL__CONSOLE_BACKEND_IOEMU as well
>>> as ..._XENCONSOLED? I took a look through tools/console and I cannot
>>> find any handling of a state node in xenstore at all, so the XENCONSOLED
>>> case seems clear. I notice that xen_console.c registers the device with
>>> DEVOPS_FLAG_IGNORE_STATE but that only seems to affect startup not
>>> teardown. I don't see a qemu_chr_close (or anything similar) anywhere in
>>> hw/xen_console.c
>> So it should apply to any console device? This means I don't have to
>> wait for any device of type LIBXL__DEVICE_KIND_CONSOLE, and there's no
>> need to check for the specific console type or Qemu.
> 
> xen_console registers a disconnect handler, con_disconnect, that should
> be able to unbind the evtchn and unmap the ring.
> "disconnect" is called by xen_backend, if the backend and frontend
> states are 5 or 6.

Yes, but I don't see con_disconnect setting the state to 6 after doing
the cleanup, and xenconsoled doesn't do so either. I think con_disconnect
should set xendev->be_state = 6, so that the parent function
xen_be_disconnect notices the state change and writes it to xenstore.
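
To make that concrete, here is a rough, untested sketch of the change I
have in mind in hw/xen_console.c; the existing cleanup body is elided and
the field/constant names (xendev->be_state, XenbusStateClosed) are from
memory, so treat it as an illustration rather than a tested patch. It also
only covers the qemu side; xenconsoled would need an equivalent change:

static void con_disconnect(struct XenDevice *xendev)
{
    /* ... existing cleanup: remove the chr handlers, unbind the
     *     event channel, unmap the ring ... */

    /* Advertise the cleanup so xen_be_disconnect sees the state
     * change and writes it to xenstore, letting the toolstack
     * observe state 6 (Closed). */
    xendev->be_state = XenbusStateClosed;
}

With something like that, libxl's wait for state 6 on the extra consoles
could complete normally for the qemu-backed case.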

Also, this only happens when a domain has multiple consoles, since the
first console is a special case and is handled separately, while the
rest of the consoles go through the "normal" unplug mechanism
(libxl__initiate_device_remove). Maybe we should treat all consoles the
same way, and unplug them using the same method we use for the first
console?

There's a comment before destroying the first console that states:

/* Currently console devices can be destroyed synchronously by just
 * removing xenstore entries, this is what libxl__device_destroy does.
 */
libxl__device_destroy(gc, dev);

It makes me wonder whether the appropriate solution would be to call
libxl__device_destroy for all console devices, not only the first one.
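
Roughly something like this inside libxl__initiate_device_remove, instead
of waiting for state 6 (untested, and I'm hand-waving over how the aodev
completion is reported through the out path):

    /* Consoles never leave state 1, so skip the xenstore handshake
     * and just remove the xenstore entries, exactly as we already do
     * for the first console. */
    if (aodev->dev->backend_kind == LIBXL__DEVICE_KIND_CONSOLE) {
        rc = libxl__device_destroy(gc, aodev->dev);
        goto out;
    }

That would also make the stubdom special case in this patch unnecessary.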

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
