[Xen-devel] [PATCH v3 0/9] libxl: New slow lock + fix libxl_cdrom_insert with QEMU depriv
Hi,

Changes in v3:
- renamed libxl__ev_lock to libxl__ev_devlock
- rebased
- 1 patch not acked: "libxl_internal: Introduce libxl__ev_devlock for
  devices hotplug via QMP"
- other patches have been updated for the new ev_devlock name and for
  the rebase

Changes in v2:
- New libxl__ev_lock, which actually respects the lock hierarchy (it is
  taken outside of CTX_LOCK).
- Some smaller changes, detailed in the patch notes.

This patch series fixes libxl_cdrom_insert to work with a deprivileged
QEMU. For that, we need to use libxl__ev_qmp, and in turn a new lock,
because userdata_lock can no longer be used.

FYI: I don't think that is enough yet to migrate a deprivileged QEMU. We
may need to open disks/cdroms in libxl before starting QEMU, similar to
what this patch series does when inserting a new cdrom.

Patch series available in this git branch:
https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.libxl-slow-lock-v3

Anthony PERARD (9):
  libxl_internal: Remove lost comment
  libxl: Pointer on usage of libxl__domain_userdata_lock
  libxl_internal: Introduce libxl__ev_devlock for devices hotplug via QMP
  libxl: Add optimisation to ev_lock
  libxl_disk: Reorganise libxl_cdrom_insert
  libxl_disk: Cut libxl_cdrom_insert into steps ..
  libxl_disk: Implement missing timeout for libxl_cdrom_insert
  libxl: Move qmp_parameters_* prototypes to libxl_internal.h
  libxl_disk: Use ev_qmp in libxl_cdrom_insert

 tools/libxl/Makefile         |   3 +
 tools/libxl/libxl_disk.c     | 341 ++++++++++++++++++++++++++++-------
 tools/libxl/libxl_internal.c | 182 +++++++++++++++++++
 tools/libxl/libxl_internal.h | 105 +++++++++--
 tools/libxl/libxl_qmp.c      |  89 ++++-----
 5 files changed, 590 insertions(+), 130 deletions(-)

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel