Re: [PATCH 3/6] x86/mem-paging: use guest handle for XENMEM_paging_op_prep



Hi Jan,

On 16/04/2020 16:46, Jan Beulich wrote:
> While it should have been this way from the beginning, not doing so will
> become an actual problem with PVH Dom0.

I think the current code is also buggy on PV dom0, because the buffer is not locked in memory. So there is no guarantee the buffer will still be resident when the hypercall is made.
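
For reference, the bounce pattern the patch switches to below avoids this,
because the data is first copied into hypercall-safe (locked) memory
(rough sketch; 'data' and 'size' are placeholder names):

    DECLARE_HYPERCALL_BOUNCE(data, size, XC_HYPERCALL_BUFFER_BOUNCE_IN);

    if ( xc_hypercall_bounce_pre(xch, data) ) /* copy into locked memory */
        return -1;
    /* ... issue the hypercall with a handle to the bounce buffer ... */
    xc_hypercall_bounce_post(xch, data);      /* release the bounce buffer */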

> The interface change is binary
> compatible, but requires tools side producers to be re-built.

> Drop the bogus/unnecessary page alignment restriction on the input
> buffer at the same time.

> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> Is there really no way to avoid the buffer copying in libxc?

> --- a/tools/libxc/xc_mem_paging.c
> +++ b/tools/libxc/xc_mem_paging.c
> @@ -26,15 +26,33 @@ static int xc_mem_paging_memop(xc_interf
>                                 unsigned int op, uint64_t gfn, void *buffer)

NIT: As you switch the handle to use const, would it make sense to also make the buffer pointer const?
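
i.e. something along the lines of (sketch only):

    static int xc_mem_paging_memop(xc_interface *xch, uint32_t domain_id,
                                   unsigned int op, uint64_t gfn,
                                   const void *buffer);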

>  {
>      xen_mem_paging_op_t mpo;
> +    DECLARE_HYPERCALL_BOUNCE(buffer, XC_PAGE_SIZE,
> +                             XC_HYPERCALL_BUFFER_BOUNCE_IN);
> +    int rc;
>  
>      memset(&mpo, 0, sizeof(mpo));
>  
>      mpo.op      = op;
>      mpo.domain  = domain_id;
>      mpo.gfn     = gfn;
> -    mpo.buffer  = (unsigned long) buffer;
>  
> -    return do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo));
> +    if ( buffer )
> +    {
> +        if ( xc_hypercall_bounce_pre(xch, buffer) )
> +        {
> +            PERROR("Could not bounce memory for XENMEM_paging_op %u", op);
> +            return -1;
> +        }
> +
> +        set_xen_guest_handle(mpo.buffer, buffer);
> +    }
> +
> +    rc = do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo));
> +
> +    if ( buffer )
> +        xc_hypercall_bounce_post(xch, buffer);
> +
> +    return rc;
>  }
>  
>  int xc_mem_paging_enable(xc_interface *xch, uint32_t domain_id,
> @@ -92,28 +110,13 @@ int xc_mem_paging_prep(xc_interface *xch
>  int xc_mem_paging_load(xc_interface *xch, uint32_t domain_id,
>                         uint64_t gfn, void *buffer)
>  {
> -    int rc, old_errno;
> -
>      errno = EINVAL;
>  
>      if ( !buffer )
>          return -1;
>  
> -    if ( ((unsigned long) buffer) & (XC_PAGE_SIZE - 1) )
> -        return -1;
> -
> -    if ( mlock(buffer, XC_PAGE_SIZE) )
> -        return -1;
> -
> -    rc = xc_mem_paging_memop(xch, domain_id,
> -                             XENMEM_paging_op_prep,
> -                             gfn, buffer);
> -
> -    old_errno = errno;
> -    munlock(buffer, XC_PAGE_SIZE);
> -    errno = old_errno;
> -
> -    return rc;
> +    return xc_mem_paging_memop(xch, domain_id, XENMEM_paging_op_prep,
> +                               gfn, buffer);
>  }
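
With the alignment and mlock() requirements dropped, a caller-side use
becomes as simple as (sketch; 'domid' is a placeholder):

    uint8_t buf[XC_PAGE_SIZE]; /* no page alignment needed any more */

    /* ... fill buf with the page contents to be loaded ... */
    if ( xc_mem_paging_load(xch, domid, gfn, buf) )
        PERROR("XENMEM_paging_op_prep failed");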
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1779,7 +1779,8 @@ void p2m_mem_paging_populate(struct doma
>   * mfn if populate was called for  gfn which was nominated but not evicted. In
>   * this case only the p2mt needs to be forwarded.
>   */
> -int p2m_mem_paging_prep(struct domain *d, unsigned long gfn_l, uint64_t buffer)
> +int p2m_mem_paging_prep(struct domain *d, unsigned long gfn_l,
> +                        XEN_GUEST_HANDLE_PARAM(const_uint8) buffer)

Shouldn't this technically be XEN_GUEST_HANDLE_64() to match the field?
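
For comparison, the two declarations in question (sketch, abridged from
the public header and this function):

    /* ABI field in struct xen_mem_paging_op (fixed 64-bit width): */
    XEN_GUEST_HANDLE_64(const_uint8) buffer;

    /* vs. the parameter type used here: */
    XEN_GUEST_HANDLE_PARAM(const_uint8) buffer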

Cheers,

--
Julien Grall