
Re: [Xen-devel] [PATCH 4/9] libxl_internal: Create new lock for devices hotplug via QMP



On Tue, Jun 04, 2019 at 06:11:23PM +0100, Ian Jackson wrote:
> Anthony PERARD writes ("[PATCH 4/9] libxl_internal: Create new lock for 
> devices hotplug via QMP"):
> > The current lock `domain_userdata_lock' can't be used when a
> > modification to a guest is done by sending commands to QEMU: that is
> > a slow process and requires calling CTX_UNLOCK, which is not
> > possible while holding the `domain_userdata_lock'.
> ...
> > +struct libxl__domain_qmp_lock {
> > +    libxl__generic_lock lock;
> > +};
> > +
> > +libxl__domain_qmp_lock *libxl__lock_domain_qmp(libxl__gc *gc,
> > +                                               libxl_domid domid)
> > +{
> > +    libxl__domain_qmp_lock *lock = NULL;
> > +    int rc;
> > +
> > +    lock = libxl__zalloc(NOGC, sizeof(*lock));
> > +    rc = libxl__lock_generic(gc, domid, &lock->lock,
> > +                             "libxl-device-changes-lock");
> > +    if (rc) {
> > +        libxl__unlock_domain_qmp(lock);
> > +        return NULL;
> > +    }
> > +
> > +    return lock;
> 
> This is almost identical to the libxl__lock_domain_userdata which
> appeared in the previous patch.  That suggests that the factorisation
> here is at the wrong layer.
> 
> > +void libxl__unlock_domain_qmp(libxl__domain_qmp_lock *lock)
> > +{
> > +    libxl__unlock_generic(&lock->lock);
> > +    free(lock);
> > +}
> 
> This is completely identical to libxl__unlock_domain_userdata.  The
> only reason we have two of these is so that the two locks are
> distinguished by the type of the lock struct.  That means that you
> have to call
>    libxl__unlock_domain_qmp(foo->qmp_lock)
> but
>    libxl__unlock_domain_userdata(foo->userdata_lock)
> or some such, and the compiler will spot if you write
>    libxl__unlock_domain_userdata(foo->qmp_lock)
> or something.  But is it really useful to have to write the `qmp' vs
> `userdata' thing twice?

So, instead of that interface, how about a different one that uses the
same C type for both kinds of lock?

    libxl__lock *libxl__lock_domain_userdata(libxl__gc *, libxl_domid);
    libxl__lock *libxl__lock_domain_qmp(libxl__gc *, libxl_domid);
    void libxl__unlock(libxl__lock *);
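
Concretely, something like this (a rough, untested sketch, reusing
libxl__generic_lock from the previous patch; sharing a single struct
type between both locks means the unlock side only has to be written
once):

    struct libxl__lock {
        libxl__generic_lock lock;
    };

    libxl__lock *libxl__lock_domain_qmp(libxl__gc *gc, libxl_domid domid)
    {
        libxl__lock *lock = libxl__zalloc(NOGC, sizeof(*lock));
        int rc;

        /* Same filename as in this patch; the lock is still
         * per-domain and per-purpose, only the C type is shared. */
        rc = libxl__lock_generic(gc, domid, &lock->lock,
                                 "libxl-device-changes-lock");
        if (rc) {
            libxl__unlock(lock);
            return NULL;
        }
        return lock;
    }

    /* libxl__lock_domain_userdata would be identical apart from the
     * lock filename. */

    void libxl__unlock(libxl__lock *lock)
    {
        libxl__unlock_generic(&lock->lock);
        free(lock);
    }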

Or maybe avoid having two functions for locking and use a #define/enum
instead:
    libxl__lock_domain(gc, domid, LOCK_USERDATA);
    libxl__lock_domain(gc, domid, LOCK_QMP);

I'm not sure what the last parameter should be, either a string or an
enum. An enum would be better, because then all the lock filenames
would live in a single location in the source code.
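
For example (again an untested sketch; I've left the userdata lock
filename as "..." since it is defined in the previous patch):

    typedef enum {
        LOCK_USERDATA,
        LOCK_QMP,
    } libxl__lock_kind;

    libxl__lock *libxl__lock_domain(libxl__gc *gc, libxl_domid domid,
                                    libxl__lock_kind kind)
    {
        /* This mapping is the single location that knows every lock
         * filename. */
        static const char *const filenames[] = {
            [LOCK_USERDATA] = "...", /* filename from previous patch */
            [LOCK_QMP]      = "libxl-device-changes-lock",
        };
        libxl__lock *lock = libxl__zalloc(NOGC, sizeof(*lock));
        int rc;

        rc = libxl__lock_generic(gc, domid, &lock->lock,
                                 filenames[kind]);
        if (rc) {
            libxl__unlock(lock);
            return NULL;
        }
        return lock;
    }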

What do you think? Would the first proposal be enough, since it already
avoids having to write `userdata' or `qmp' a second time on unlock?

> > + * It is to be acquired by an ao event callback.
> 
> I think there is a worse problem here, though.  This lock is probably
> *inside* some lock from the caller.  So usual libxl rules apply and
> you may not block to acquire it.
> 
> I.e. libxl__lock_domain_qmp must be one of these things that takes a
> little ev state struct and makes a callback.
> 
> At the syscall level, acquiring an fcntl lock cannot be selected on.
> The options are to poll, or to spawn a thread, or to fork.
> 
> Currently libxl does not call pthread_create, although maybe it could
> do.  To safely create a thread you have to do a dance with
> sigprocmask, to avoid signal delivery onto your thread.
> 
> Maybe it would be better to try once with LOCK_NB, and to fork if this
> is not successful.  But it would be simpler to always fork...

After our talk IRL, I'll go the fork route.
Also, I'm thinking of always forking when libxl is built with
"debug=y", and only allowing the optimisation of trying LOCK_NB first
when built with "debug=n", so that the forked code path actually gets
tested regularly.

Thanks,

-- 
Anthony PERARD

