
[Xen-devel] Xen hypercall API/ABI problems


While attempting to teach a hypercall-aware valgrind about enough
hypercalls to allow it to introspect HVM domain migration I came across
some systemic problems with certain hypercalls, particularly with migrate.

Here is the example of XENMEM_maximum_ram_page, but it is not alone as
far as this goes.

In Xen, it is defined as


 * Returns the maximum machine frame number of mapped RAM in this system.
 * This command always succeeds (it never returns an error code).
 * arg == NULL.

In memory.c, there is a possible unsigned->signed conversion error when
max_pages is assigned to rc.
In compat/memory.c, there is a long->int truncation error for compat
hypercalls, although newer versions of Xen clamp the value at INT_{MIN,MAX}.

The privcmd driver passes the hypercall rc straight through as the return
value of the ioctl handler, introducing a possible long->int truncation error.

In libxc, do_memory_op() is expected to use -errno style error handling,
but does not enforce it.  There is also a possible int->long
sign-extension issue in xc_maximum_ram_page().

The value from this is then stuffed into the unsigned long minfo->max_mfn
and immediately used in trying to map the M2P table.

From the work with XSA-55, we have already identified that the error
handling and propagation in libxc leaves a lot to be desired.  However,
the hypervisor side of things is just as problematic.

What policy do we have about deprecating hypercall interfaces and
introducing newer ones?  At a minimum, all hypercalls should be using
-errno style errors, with a possibility of returning 0 to LONG_MAX as well.

I realise that simply changing the hypercalls in place is not possible. 
Would it be acceptable to have a step change across a Xen version (say
early in 4.4) where consumers of the public interface would have to make
use of -DXEN_LEGACY_UNSAFE_HYPERCALLS (or equivalent) in an attempt to
move them forward with the API ?


Xen-devel mailing list
