
Re: [Xen-devel] [PATCH v3 4/5] libxl: update vcpus bitmap in retrieved guest config



On Fri, Jul 08, 2016 at 06:44:28PM +0100, Ian Jackson wrote:
> Wei Liu writes ("[PATCH v3 4/5] libxl: update vcpus bitmap in retrieved guest config"):
> > ... because the available vcpu bitmap can change during domain life time
> > due to cpu hotplug and unplug.
> > 
> > For QEMU upstream, we interrogate QEMU for the number of vcpus. For
> > others, we look directly into xenstore for information.
> ...
> > +static int libxl__update_avail_vcpus_qmp(libxl__gc *gc, uint32_t domid,
> > +                                         unsigned int max_vcpus,
> > +                                         libxl_bitmap *map)
> > +{
> > +    int rc;
> > +
> > +    /* For QEMU upstream we always need to return the number
> > +     * of cpus present to QEMU whether they are online or not;
> > +     * otherwise QEMU won't accept the saved state.
> 
> This comment is confusing to me.  Perhaps `return' should read
> `provide' ?  But the connection between the body of this function and
> the comment is still obscure.
> 

"provide" is ok.

To avoid confusion the comment can be moved a few lines up to describe
the function rather than the code snippet. Does that work better?
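
Something like this (untested sketch; same comment text, with
"provide" substituted):

    /* For QEMU upstream we always need to provide the number
     * of cpus present to QEMU whether they are online or not;
     * otherwise QEMU won't accept the saved state.
     */
    static int libxl__update_avail_vcpus_qmp(libxl__gc *gc, uint32_t domid,
                                             unsigned int max_vcpus,
                                             libxl_bitmap *map)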

> > +static int libxl__update_avail_vcpus_xenstore(libxl__gc *gc, uint32_t domid,
> > +                                              unsigned int max_vcpus,
> > +                                              libxl_bitmap *map)
> > +{
> > +    int rc;
> > +    unsigned int i;
> > +    const char *dompath;
> ...
> > +    for (i = 0; i < max_vcpus; i++) {
> > +        const char *path = GCSPRINTF("%s/cpu/%u/availability", dompath, i);
> > +        const char *content = libxl__xs_read(gc, XBT_NULL, path);
> > +        if (!strcmp(content, "online"))
> > +            libxl_bitmap_set(map, i);
> 
> Firstly, this should be libxl__xs_read_checked.  Secondly if "content"
> is NULL, this will crash.
> 

Will fix.
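
Roughly like this (sketch only; it assumes the usual
libxl__xs_read_checked semantics, i.e. rc == 0 with a NULL result
for a missing node):

    for (i = 0; i < max_vcpus; i++) {
        const char *path = GCSPRINTF("%s/cpu/%u/availability", dompath, i);
        const char *content;

        rc = libxl__xs_read_checked(gc, XBT_NULL, path, &content);
        if (rc) goto out;
        /* treat a missing node as offline rather than crashing */
        if (content && !strcmp(content, "online"))
            libxl_bitmap_set(map, i);
    }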

> > @@ -7294,6 +7341,55 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
> >          libxl_dominfo_dispose(&info);
> >      }
> >  
> > +    /* VCPUs */
> > +    {
> > +        libxl_bitmap *map = &d_config->b_info.avail_vcpus;
> > +        unsigned int max_vcpus = d_config->b_info.max_vcpus;
> > +        libxl_device_model_version version;
> > +
> > +        libxl_bitmap_dispose(map);
> > +        libxl_bitmap_init(map);
> > +        libxl_bitmap_alloc(CTX, map, max_vcpus);
> > +        libxl_bitmap_set_none(map);
> 
> I see that this function already trashes the domain config when it
> fails (leaving it up to the caller to free the partial config) but
> that this is not documented :-/.

d_config is an out parameter. The user can't expect the returned d_config
to be valid if this API returns an error. And the user is still responsible
for calling _init before calling this API and _dispose afterwards. So this
still fits how we expect libxl types to be used; nothing new needs to be
documented.

All other fields are already handled in this way.
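
That is, the expected caller pattern is (hypothetical caller sketch,
not from the patch):

    libxl_domain_config d_config;
    int rc;

    libxl_domain_config_init(&d_config);
    rc = libxl_retrieve_domain_configuration(ctx, domid, &d_config);
    /* on error d_config may be partially filled in; the caller
     * must dispose of it either way */
    libxl_domain_config_dispose(&d_config);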

> 
> > +        /* If the user did not explicitly specify a device model
> > +         * version, we need to use the same logic used during domain
> > +         * creation to derive the device model version.
> > +         */
> > +        version = d_config->b_info.device_model_version;
> > +        if (version == LIBXL_DEVICE_MODEL_VERSION_UNKNOWN)
> > +            version = libxl__get_device_model_version(gc, &d_config->b_info);
> 
> I think this is rather unfortunate.  Won't changing the default device
> model while the domain is running cause this function to give wrong
> answers ?

I think saving the device model version into the template should work.
No need to derive it at runtime.
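
Something along these lines at creation time (sketch of the idea,
not a tested patch):

    /* Resolve the default device model version once during domain
     * creation ... */
    if (d_config->b_info.device_model_version ==
            LIBXL_DEVICE_MODEL_VERSION_UNKNOWN)
        d_config->b_info.device_model_version =
            libxl__get_device_model_version(gc, &d_config->b_info);
    /* ... before stashing d_config as the retrievable template, so
     * retrieval can just read the saved value back instead of
     * re-deriving it. */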

Wei.

> 
> Thanks,
> Ian.
