
Re: [Xen-devel] [PATCH] xenconsole: Ensure exclusive access to console using locks

Martin Lucina writes ("[PATCH] xenconsole: Ensure exclusive access to console 
using locks"):
> If more than one instance of xenconsole is run against the same DOMID
> then each instance will only get some data. This change ensures
> exclusive access to the console by creating and obtaining an exclusive
> lock on <XEN_LOCK_DIR>/xenconsole.<DOMID>.

It is a shame that we are still using ptys for the xenconsole
connection.  If it were a socket we could allow as many clients as we
like.  But I haven't got round to fixing this for the last n years so
I think the general plan you have makes sense.

> +static void console_lock(int domid)
> +{
> +     lockfile = malloc(PATH_MAX);
> +     if (lockfile == NULL)
> +             err(ENOMEM, "malloc");
> +     snprintf(lockfile, PATH_MAX - 1, "%s/xenconsole.%d", XEN_LOCK_DIR, domid);

Why not use asprintf ?

> +     lockfd = open(lockfile, O_RDWR | O_CREAT, S_IRUSR | S_IWUSR);
> +     if (lockfd == -1)
> +             err(errno, "Could not open %s", lockfile);
> +     if (flock(lockfd, LOCK_EX|LOCK_NB) != 0)
> +             err(errno, "Could not lock %s", lockfile);
> +}

This locking strategy is not safe if the lockfile is ever unlinked,
because it allows:

    A   open         flock              .o{ I have the lock }
    B        unlink

    C                      open flock   .o{ I have the lock }

> +static void console_unlock(void)
> +{
> +     if (lockfd != -1) {
> +             flock(lockfd, LOCK_UN);
> +             close(lockfd);
> +     }
> +     if (lockfile != NULL)
> +             unlink(lockfile);

And this unlinking strategy is not safe even if we are more careful
with our locking.  You must only unlink with the lock held.

You should use the same recipe as with-lock-ex (from chiark-utils)
(which we also use in tools/hotplug/Linux/locking.sh and

