
[Xen-devel] [PATCH 0/3] fix hypercall buffer locking in memory



On Linux systems hypercall buffers in user memory are allocated with
the MAP_LOCKED attribute. Unfortunately that doesn't guarantee the
buffer will always be accessible by the hypervisor, as the kernel might
set the PTE for the buffer to invalid or read-only for short periods of
time, e.g. due to page migration or compaction.
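To illustrate the current situation, here is a minimal sketch of that
kind of allocation (not the exact libxencall code): MAP_LOCKED keeps
the pages resident, but the kernel may still briefly invalidate the
PTEs.

  #include <stddef.h>
  #include <sys/mman.h>

  /*
   * Sketch only: an anonymous, locked mapping as used for hypercall
   * buffers today.  MAP_LOCKED prevents the pages from being swapped
   * out, but migration/compaction may still make the PTEs invalid or
   * read-only for a short time.
   */
  static void *alloc_hypercall_buffer(size_t npages)
  {
      size_t size = npages * 4096;        /* assuming 4k pages */
      void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);

      if (p == MAP_FAILED)
          return NULL;

      /* Hypercall buffers must not be shared with forked children. */
      madvise(p, size, MADV_DONTFORK);

      return p;
  }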

This results in highly sporadic -EFAULT errors for hypercalls issued by
the Xen tools.

Fix this problem by using a new device node of the Linux privcmd driver
for allocating hypercall buffers.
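
As a rough illustration (the device path and flags below are my
assumptions, not taken from the patches), allocating via the privcmd
buffer device looks roughly like this; memory obtained that way stays
accessible to the hypervisor for the lifetime of the mapping.

  #include <fcntl.h>
  #include <stddef.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /*
   * Sketch only: map a hypercall buffer from the privcmd buffer device.
   * "/dev/xen/hypercall" is an assumed path; on kernels lacking the
   * device the open() fails and the caller falls back to the anonymous
   * MAP_LOCKED mapping shown above.
   */
  static void *alloc_from_buf_dev(size_t npages, int *fdp)
  {
      size_t size = npages * 4096;        /* assuming 4k pages */
      int fd = open("/dev/xen/hypercall", O_RDWR | O_CLOEXEC);
      void *p;

      if (fd < 0)
          return NULL;

      p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (p == MAP_FAILED) {
          close(fd);
          return NULL;
      }

      *fdp = fd;
      return p;
  }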

Add a fallback for the case that the Linux kernel doesn't support the
new device node: retry the getpageframeinfo3 hypercall, which until now
has been the only one observed to suffer from that problem.
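
A sketch of what such a fallback can look like; the helper names below
(buffers_never_fault(), issue_hypercall()) are placeholders, not the
interfaces actually added by patches 2 and 3.

  #include <errno.h>

  /* Placeholders standing in for the real libxencall/libxc interfaces. */
  struct call_ctx;
  struct call;
  int buffers_never_fault(struct call_ctx *ctx);   /* buffer safety query */
  long issue_hypercall(struct call_ctx *ctx, struct call *call);

  #define EFAULT_RETRIES 3

  /*
   * Sketch only: if the buffers may fault (no buffer device available),
   * retry a bounded number of times on -EFAULT, as the window in which
   * migration/compaction makes the PTE unusable is short.
   */
  static long hypercall_with_retry(struct call_ctx *ctx, struct call *call)
  {
      unsigned int tries = buffers_never_fault(ctx) ? 1 : EFAULT_RETRIES;
      long ret;

      do {
          ret = issue_hypercall(ctx, call);
      } while (ret == -EFAULT && --tries);

      return ret;
  }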

This series is meant to be included in 4.11, so it can immediately have
my:

Release-acked-by: Juergen Gross <jgross@xxxxxxxx>


Juergen Gross (3):
  tools/libxencall: use hypercall buffer device if available
  tools/libxencalls: add new function to query hypercall buffer safety
  tools/libxc: retry hypercall in case of EFAULT

 tools/libs/call/Makefile          |  2 +-
 tools/libs/call/core.c            |  8 +++++++-
 tools/libs/call/freebsd.c         |  5 +++++
 tools/libs/call/include/xencall.h |  7 +++++++
 tools/libs/call/libxencall.map    |  5 +++++
 tools/libs/call/linux.c           | 34 ++++++++++++++++++++++++++++++++--
 tools/libs/call/minios.c          |  5 +++++
 tools/libs/call/netbsd.c          |  5 +++++
 tools/libs/call/private.h         |  1 +
 tools/libs/call/solaris.c         |  5 +++++
 tools/libxc/xc_private.c          |  2 +-
 tools/libxc/xc_private.h          | 24 +++++++++++++++++++++---
 12 files changed, 95 insertions(+), 8 deletions(-)

-- 
2.13.7

