
Re: [Xen-devel] [PATCH] tools/libxl: Fixes to stream v2 task joining logic

On Thu, Jul 23, 2015 at 12:09:38PM +0100, Andrew Cooper wrote:
> During review of the libxl migration v2 series, I changed the task
> joining logic, but clearly didn't think the result through
> properly. This patch fixes several errors.
> 1) Do not call check_all_finished() in the success cases of
> libxl__xc_domain_{save,restore}_done().  The call serves no purpose,
> as the save helper state will already report itself as inactive by
> this point, and omitting it avoids triggering a second
> stream->completion_callback() in the case that
> write_toolstack_record()/stream_continue() record errors
> synchronously themselves.
> 2) Only ever set stream->rc in stream_complete().  The first version of
> the migration v2 series had separate rc and joined_rc parameters, where
> this logic worked.  However, when the two were combined, the teardown path
> fails to trigger if stream_done() records stream->rc itself.  A side
> effect of this is that stream_done() needs to take an rc parameter.
> 3) Avoid stacking of check_all_finished() via synchronous teardown of
> tasks.  If the _abort() functions call back synchronously,
> stream->completion_callback() ends up getting called twice, as both the
> first and last check_all_finished() frames observe every task being finished.
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> CC: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
> CC: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
> CC: Wei Liu <wei.liu2@xxxxxxxxxx>
> ---
> I found this while working to fix the toolstack record issue, but am
> posting this bugfix ahead of the other work as OSSTest has tripped over
> the issue.

This change itself doesn't seem to have anything to do with libxc. In
OSSTest the error that triggers this knock-on effect is the failure of
xc_map_foreign_bulk. Does that mean this patch only fixes half of the
problem seen in OSSTest?


Xen-devel mailing list