
[Xen-devel] [PATCH V2] libxenstore: filter watch events in libxenstore when we unwatch



Julien Grall writes ("[PATCH V2] libxenstore: filter watch events in 
libxenstore when we unwatch"):
> While on entry to xs_unwatch, there may be an event on its way from
> xenstored (eg in the ring or in the local kernel), all such events
> will definitely come before the reply to the unwatch command.  So at
> the point where the unwatch reply has been processed (after xs_talkv),
> any such now-deleted watch events will definitely have made it to
> libxenstore's queue where we can remove them.

Thanks.

> +
> +/* Clear the pipe token if there are no more pending watches.
> + * We assume the watch_mutex is already held.
> + */
> +static void xs_clear_watch_pipe(struct xs_handle *h)

I think this would be better called xs_maybe_clear_watch_pipe or
something.  Since it doesn't always clear the watch pipe, it makes the
call sites confusing.
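
Something of this shape is what I have in mind (a rough sketch only;
the body here is my assumption about what the helper does, based on the
watch_pipe and watch_list fields in struct xs_handle, not the submitted
code):

    /* Drain the notification byte only if no watch events remain
     * queued.  Caller must hold watch_mutex. */
    static void xs_maybe_clear_watch_pipe(struct xs_handle *h)
    {
        char c;

        if (list_empty(&h->watch_list) && h->watch_pipe[0] != -1)
            while (read(h->watch_pipe[0], &c, 1) != 1 && errno == EINTR)
                ;
    }

That way the "maybe" makes it obvious at each call site that the pipe
is only touched once the queue has actually drained.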

> @@ -855,14 +864,62 @@ char **xs_read_watch(struct xs_handle *h, unsigned int *num)
>  bool xs_unwatch(struct xs_handle *h, const char *path, const char *token)
>  {
>       struct iovec iov[2];
> +     struct xs_stored_msg *msg, *tmsg;
> +     bool res;
> +     char *s, *p;
> +     unsigned int i;
> +     char *l_token, *l_path;
>  
>       iov[0].iov_base = (char *)path;
>       iov[0].iov_len = strlen(path) + 1;
>       iov[1].iov_base = (char *)token;
>       iov[1].iov_len = strlen(token) + 1;
>  
> -     return xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
> -                             ARRAY_SIZE(iov), NULL));
> +     res = xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
> +                                                ARRAY_SIZE(iov), NULL));
> +
> +     /* Filter the watch list to remove potential messages */
> +     mutex_lock(&h->watch_mutex);
> +
> +     if (list_empty(&h->watch_list)) {
> +             mutex_unlock(&h->watch_mutex);
> +             return res;
> +     }

I still think this is redundant, so it should not be here.
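
(With an empty list, list_for_each_entry_safe simply never enters its
body, so dropping the check leaves the function with the obvious shape.
Sketched here with the filtering loop elided, and assuming the
pipe-clearing call goes at the end, which the quoted hunk does not
show:

    mutex_lock(&h->watch_mutex);

    list_for_each_entry_safe(msg, tmsg, &h->watch_list, list) {
        /* ... filter out events matching this token and path, as below ... */
    }

    xs_maybe_clear_watch_pipe(h);

    mutex_unlock(&h->watch_mutex);
    return res;

so no early return is needed.)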

> +     list_for_each_entry_safe(msg, tmsg, &h->watch_list, list) {
> +             assert(msg->hdr.type == XS_WATCH_EVENT);
> +
> +             s = msg->body;
> +
> +             l_token = NULL;
> +             l_path = NULL;
> +
> +             for (p = s, i = 0; p < msg->body + msg->hdr.len; p++) {
> +                     if (*p == '\0')
> +                     {
> +                             if (i == XS_WATCH_TOKEN)
> +                                     l_token = s;
> +                             else if (i == XS_WATCH_PATH)
> +                                     l_path = s;
> +                             i++;
> +                             s = p + 1;
> +                     }
> +             }
> +
> +             if (l_token && !strcmp(token, l_token)
> +                     /* Use strncmp because we can have a watch fired on a sub-directory */

Oh bum.  I see a problem.  This is quite a bad problem.

It is legal to do this:

   client:

     XS_WATCH /foo token1
     XS_WATCH /foo/bar token1

Now if you get a watch event you get the token and the actual path.
So suppose we have, simultaneously:

   our client:                          another client:
     XS_UNWATCH /foo/bar token1          WRITE /foo/bar/zonk sponge

Then xenstored will generate two watch events:
                                WATCH_EVENT /foo/bar/zonk token1
                                WATCH_EVENT /foo/bar/zonk token1

With your patch, both of these would be thrown away.  Whereas in fact
one of them should still be presented.
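
A compilable client-side sketch of that sequence (illustrative only,
using the paths and token from above):

    #include <stdbool.h>
    #include <xenstore.h>

    int main(void)
    {
        struct xs_handle *h = xs_open(0);
        if (!h)
            return 1;

        /* Perfectly legal: two watches sharing one token. */
        xs_watch(h, "/foo", "token1");
        xs_watch(h, "/foo/bar", "token1");

        /*
         * If another client now does WRITE /foo/bar/zonk, xenstored
         * queues two WATCH_EVENTs for /foo/bar/zonk, both carrying
         * token1: one for the /foo watch, one for the /foo/bar watch.
         */

        /* With the proposed filtering this discards *both* queued
         * events, even though the /foo watch is still registered and
         * should still see one of them. */
        xs_unwatch(h, "/foo/bar", "token1");

        xs_close(h);
        return 0;
    }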

How confident are we that there are no clients which rely on this not
changing?

At the very least this needs a documentation patch explaining the
undesirable consequences of setting multiple watches with the same
token.  Arguably it needs a flag to the library (or a new function
name) to ask for this new behaviour.
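
For example, a new entry point might look like this (name and prototype
entirely hypothetical, just to illustrate the shape):

    /* Hypothetical opt-in variant: like xs_unwatch(), but additionally
     * discards any already-queued events carrying this token whose path
     * is at or below the unwatched path. */
    bool xs_unwatch_filtered(struct xs_handle *h,
                             const char *path, const char *token);

with xs_unwatch() itself left alone, so existing clients keep the
semantics they rely on today.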

I would have to think, for example, about whether the new libxl event
sub-library would make any mistakes as a result of this proposed
change.  I think it wouldn't...

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
