Re: [Xen-devel] [PATCH v2 2/3] libxl: update vcpus bitmap in retrieved guest config
On Mon, Jun 13, 2016 at 06:39:36PM +0100, Anthony PERARD wrote:
> On Wed, Jun 08, 2016 at 03:28:45PM +0100, Wei Liu wrote:
> > ... because the available vcpu bitmap can change during domain life time
> > due to cpu hotplug and unplug.
> >
> > For QEMU upstream, we interrogate QEMU for the number of vcpus. For
> > others, we look directly into xenstore for information.
>
> I tried to migrate a guest, and libxl abort in
> libxl_retrieve_domain_configuration within the switch
> (device_model_version).
>
> > Reported-by: Jan Beulich <jbeulich@xxxxxxxx>
> > Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> > ---
> >  tools/libxl/libxl.c | 87 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 87 insertions(+)
> >
> > diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> > index 006b83f..02706ab 100644
> > --- a/tools/libxl/libxl.c
> > +++ b/tools/libxl/libxl.c
> > @@ -7222,6 +7222,53 @@ void libxl_mac_copy(libxl_ctx *ctx, libxl_mac *dst, libxl_mac *src)
> >          (*dst)[i] = (*src)[i];
> >  }
> >
> > +static int libxl__update_avail_vcpus_qmp(libxl__gc *gc, uint32_t domid,
> > +                                         unsigned int max_vcpus,
> > +                                         libxl_bitmap *map)
> > +{
> > +    int rc;
> > +
> > +    /* For QEMU upstream we always need to return the number
> > +     * of cpus present to QEMU whether they are online or not;
> > +     * otherwise QEMU won't accept the saved state.
> > +     */
> > +    rc = libxl__qmp_query_cpus(gc, domid, map);
> > +    if (rc) {
> > +        LOG(ERROR, "fail to get number of cpus for domain %d", domid);
> > +        goto out;
> > +    }
> > +
> > +    rc = 0;
>
> The value should already be 0 at this point.

I would like to keep this as-is because this is an idiom that is safer
against further modification of this function.
> > +out:
> > +    return rc;
> > +}
> > +
> > +static int libxl__update_avail_vcpus_xenstore(libxl__gc *gc, uint32_t domid,
> > +                                              unsigned int max_vcpus,
> > +                                              libxl_bitmap *map)
> > +{
> > +    int rc;
> > +    unsigned int i;
> > +    const char *dompath;
> > +
> > +    dompath = libxl__xs_get_dompath(gc, domid);
> > +    if (!dompath) {
> > +        rc = ERROR_FAIL;
> > +        goto out;
> > +    }
> > +
> > +    for (i = 0; i < max_vcpus; i++) {
> > +        const char *path = GCSPRINTF("%s/cpu/%u/availability", dompath, i);
> > +        const char *content = libxl__xs_read(gc, XBT_NULL, path);
> > +        if (!strncmp(content, "online", strlen("online")))
>
> I don't think strncmp is useful here as one of the arguments is a plain
> string. One could just use strcmp?

Fine by me of course.

> > +            libxl_bitmap_set(map, i);
> > +    }
> > +
> > +    rc = 0;
> > +out:
> > +    return rc;
> > +}
> > +
> >  int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
> >                                          libxl_domain_config *d_config)
> >  {
> > @@ -7270,6 +7317,46 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
> >          libxl_dominfo_dispose(&info);
> >      }
> >
> > +    /* VCPUs */
> > +    {
> > +        libxl_bitmap *map = &d_config->b_info.avail_vcpus;
> > +        unsigned int max_vcpus = d_config->b_info.max_vcpus;
> > +
> > +        libxl_bitmap_dispose(map);
> > +        libxl_bitmap_init(map);
> > +        libxl_bitmap_alloc(CTX, map, max_vcpus);
> > +        libxl_bitmap_set_none(map);
> > +
> > +        switch (d_config->b_info.type) {
> > +        case LIBXL_DOMAIN_TYPE_HVM:
> > +            switch (d_config->b_info.device_model_version) {
> > +            case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
> > +                rc = libxl__update_avail_vcpus_qmp(gc, domid,
> > +                                                   max_vcpus, map);
> > +                break;
> > +            case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
> > +            case LIBXL_DEVICE_MODEL_VERSION_NONE:
> > +                rc = libxl__update_avail_vcpus_xenstore(gc, domid,
> > +                                                        max_vcpus, map);
> > +                break;
> > +            default:
> > +                abort();
>
> Missing indentation for abort.

Will fix.

> Also, that is where xl abort on migration.
Hmm... This means the device model version is not valid (unknown?).

Can you paste in your guest config?

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel