
[Xen-devel] [PATCH 0/3] Revert xc cpupool retries and document anomaly



This small series is part of some cleanup for the CPUPOOL RMCPU EBUSY
problem.

The first patch in this series, a libxc patch, reverts what I think is
simply a mistake.  The second is trivial: it adds some annotations for
the benefit of the hypercall API HTML docs generator.  The third patch
provides an attempt at documenting the RMCPU EBUSY problem.


It is disappointing that such a strange and hard-to-use interface
should be committed to the hypervisor.  A contributing factor seems to
have been a failure to realise that the behaviour should be documented.
A proper attempt to document it would probably have resulted in
improvements to the interface.

I think that (unless the suppositions not marked with `xxx' are false)
this doc comment is an improvement over leaving the anomaly totally
undocumented.  I therefore think that this patch is appropriate to
commit right away (!)

Followups, or review comments, which provide more accurate or more
precise wording, would of course be welcome.

I also think that there is room for improvement in the presented
hypercall semantics, as I understand them.


I would have liked to submit a fourth patch which made xl print the
unhelpful message about trying to re-add and re-remove the cpu only
when it was relevant.

Unfortunately, because the hypervisor does not seem to distinguish the
troublesome cases, it is not possible for libxl to tell the difference,
and so libxl cannot return a different error code which xl could use
to tell the cases apart.

I would appreciate suggestions from hypervisor maintainers as to how
this could be improved.


Finally, sorry for not spotting these problems earlier.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
