
Re: [Xen-devel] [PATCH v9 06/17] libxl: convert libxl__device_disk_local_attach to an async op



Ian Jackson wrote:
>> Ian Jackson wrote:
>>>> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
>>> ...
>>>> @@ -2191,6 +2202,7 @@ char * libxl__device_disk_local_attach(libxl__gc *gc,
>>>>              default:
>>>>                  LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
>>>>                             "unrecognized disk format: %d", disk->format);
>>>> +                rc = ERROR_FAIL;
>>>>                  break;
>>> Why `break' and not `goto out' ?  (Here and later.)
>>>
>>> Perhaps this would be clear from context.
>> This is part of the original code, and I don't think it's a good idea
>> to add more patches to my series if we want to get them in for 4.2.
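
For reference, the idiom being asked for looks roughly like this (a
minimal, self-contained sketch of the pattern, not the actual libxl
code):

#include <stdio.h>
#include <stdlib.h>

#define ERROR_FAIL (-3)   /* libxl's error codes are negative */

/* Every failure path sets rc and jumps to the single cleanup label,
 * so partially-acquired resources are released exactly once and the
 * function has one exit point. */
static int process(const char *path)
{
    int rc;
    FILE *f = NULL;
    char *buf = NULL;

    f = fopen(path, "r");
    if (!f) { rc = ERROR_FAIL; goto out; }

    buf = malloc(4096);
    if (!buf) { rc = ERROR_FAIL; goto out; }

    /* ... the real work with f and buf goes here ... */
    rc = 0;

out:
    free(buf);            /* free(NULL) is a no-op */
    if (f) fclose(f);
    return rc;
}

int main(int argc, char **argv)
{
    return argc > 1 ? -process(argv[1]) : 0;
}
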
> 
> Perhaps this would have been clearer in context.  Can I have a git
> branch to look at next time pretty please ? :-)  (I see you're working
> on that, great, thanks.)

Yes, I will try to send a pull request later.

>>>> +static void bootloader_finished_cb(libxl__egc *egc,
>>>> +                                   libxl__disk_local_state *dls,
>>>> +                                   int rc)
>>>> +{
>>>> +    STATE_AO_GC(dls->ao);
>>>> +    libxl__bootloader_state *bl = CONTAINER_OF(dls, *bl, dls);
>>>> +
>>>> +    free(bl->dls.diskpath);
>>>> +    bl->dls.diskpath = 0;
>>> Surely that isn't right ?  We should let the local detach machinery
>>> free this.  The doc comment in the dls struct definition doesn't say
>>> we're to mess with it like this.
>> I've changed that so the free is done inside the callback of
>> libxl__device_disk_local_initiate_detach.
> 
> Is the diskpath not from the gc ?  If it isn't then the memory
> lifetime and ownership needs to be documented by the variable
> definition (ie, the struct member definition).

No, and frankly I can't find any reason why it shouldn't be allocated
from the gc. The local attach/detach functions are internal to libxl,
and if the resulting diskpath has to survive the gc I would say that
is the caller's problem.
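
To illustrate, allocating it from the gc would look something like
this (local_attach_done is a hypothetical helper for illustration,
not a function from the series):

static void local_attach_done(libxl__egc *egc,
                              libxl__disk_local_state *dls,
                              const char *path)
{
    STATE_AO_GC(dls->ao);

    /* gc-owned: freed automatically when the operation's gc is
     * freed, so neither the callbacks nor the caller free() it */
    dls->diskpath = libxl__strdup(gc, path);

    dls->callback(egc, dls, 0);
}
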

>>>> +static void bootloader_disk_attached_cb(libxl__egc *egc,
>>>> +                                        libxl__disk_local_state *dls,
>>>> +                                        int rc)
>>>> +{
>>>> +    STATE_AO_GC(dls->ao);
>>>> +    libxl__bootloader_state *bl = CONTAINER_OF(dls, *bl, dls);
>>>> +    const libxl_domain_build_info *info = bl->info;
>>>> +    const char *bootloader;
>>>> +
>>>> +    if (rc) {
>>>> +        LOG(ERROR, "failed to attach local disk for bootloader execution");
>>>> +        dls->callback = bootloader_disk_failed_cb;
>>>> +        libxl__device_disk_local_initiate_detach(egc, dls);
>>>> +        return;
>>> So in line with my comments about the possible states of a dls, I
>>> think this recovery should be done by the local attach machinery, not
>>> left to the caller like this.
>> Ok, I agree this is not optimal, but I would also argue that it
>> behaves the same as the previous code. I will fix this by calling
>> the detach function from the attach callback when the attach fails.
> 
> I'm not sure what you mean by "I will fix this by ...", because AFAICT
> you are already calling "the detach function"
> (libxl__device_disk_local_initiate_detach) in "the attach callback"
> (bootloader_disk_attached_cb).

Yes, but this is the attach callback provided by the user, which means
we would be calling the user's callback with the disk in an unknown
state.

I've now fixed this so that the detach is done internally, and the
user doesn't have to deal with it: if the local attach fails, the
bootloader just has to exit.
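
Something roughly along these lines (a sketch only; the helper names
and the saved_rc/saved_callback fields are illustrative, not
necessarily what the series adds):

static void attach_failed_detach_done(libxl__egc *egc,
                                      libxl__disk_local_state *dls,
                                      int rc_detach);

static void local_device_attach_cb(libxl__egc *egc,
                                   libxl__disk_local_state *dls,
                                   int rc)
{
    if (rc) {
        /* detach internally and report only once that is done, so
         * the user's callback always sees a known (detached) disk
         * state */
        dls->saved_rc = rc;
        dls->saved_callback = dls->callback;
        dls->callback = attach_failed_detach_done;
        libxl__device_disk_local_initiate_detach(egc, dls);
        return;
    }
    dls->callback(egc, dls, 0);
}

static void attach_failed_detach_done(libxl__egc *egc,
                                      libxl__disk_local_state *dls,
                                      int rc_detach)
{
    /* report the original attach failure, not the detach result */
    dls->saved_callback(egc, dls, dls->saved_rc);
}
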

> Anyway, I don't really mind whether the disk local attach exposes this
> Error state but if you do expose it you should document it.
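
Since the detach now happens internally, the error state is no longer
exposed to the caller, so the doc comment mostly needs to spell out
what the callback guarantees. Something along these lines (a sketch;
the field list is abbreviated and the wording illustrative):

typedef struct libxl__disk_local_state libxl__disk_local_state;
typedef void libxl__disk_local_state_callback(libxl__egc *,
                                              libxl__disk_local_state *,
                                              int rc);

struct libxl__disk_local_state {
    /* filled in by the caller before initiating the attach: */
    libxl__ao *ao;
    libxl_device_disk disk;
    /* set by the attach machinery; valid only from a successful
     * attach callback until detach is initiated; allocated from the
     * gc, so callers must never free() it: */
    char *diskpath;
    /* called on completion of both attach and detach; if the attach
     * fails the machinery detaches internally first, so a callback
     * with rc != 0 always leaves the dls detached: */
    libxl__disk_local_state_callback *callback;
};
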



