[PATCH] x86/HVM: refine when to send mapcache invalidation request to qemu

For one, it was wrong to send the request only upon a completed
hypercall: even if only part of it completed before getting preempted,
invalidation ought to occur. Therefore fold the two return statements
such that the request gets sent ahead of the preemption check.
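
The tail of hvm_hypercall() then ends up looking like this (a sketch of
the net effect of the hunk below; curr/currd are the function's
existing locals):

    /*
     * Send any pending invalidation request first, so that a hypercall
     * preempted part way through still results in qemu's mapcache
     * getting flushed.
     */
    if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
         test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
        send_invalidate_req();

    return curr->hcall_preempted ? HVM_HCALL_preempted
                                 : HVM_HCALL_completed;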

Furthermore, XENMEM_decrease_reservation isn't the only means by which
pages can get removed from a guest, yet all removals ought to be
signaled to qemu. Hence move the setting of the flag into the central
p2m_remove_page(), which underlies all the respective hypercalls.
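
Roughly, the removal paths all funnel through there; an abbreviated
sketch (the intermediate helpers are quoted from memory, not verbatim
from the tree):

    XENMEM_decrease_reservation
        -> decrease_reservation() -> guest_remove_page()
            -> guest_physmap_remove_page() -> p2m_remove_page()
    XENMEM_remove_from_physmap
        -> guest_physmap_remove_page() -> p2m_remove_page()
    XENMEM_add_to_physmap    /* when replacing an existing mapping */
        -> ... -> guest_physmap_remove_page() -> p2m_remove_page()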

Finally, there's no point sending the request for the local domain
when the domain acted upon is a different one. If anything, that remote
domain's qemu mapcache may need invalidating, but it's unclear how
useful this would be: that remote domain may not execute hypercalls at
all, and hence may never make it to the point where the request
actually gets issued. I guess the assumption is that such manipulation
isn't supposed to happen anymore once the guest has been started?

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
Putting the check in guest_physmap_remove_page() might also suffice,
but then a separate is_hvm_domain() check would need adding again.
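
A sketch of that alternative (hypothetical, not what the patch does;
d is guest_physmap_remove_page()'s domain parameter):

    /* Hypothetical placement in guest_physmap_remove_page(), where
     * the domain being HVM isn't implied by the surrounding code: */
    if ( is_hvm_domain(d) && d == current->domain )
        d->arch.hvm.qemu_mapcache_invalidate = true;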

--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -31,7 +31,6 @@
 
 static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    const struct vcpu *curr = current;
     long rc;
 
     switch ( cmd & MEMOP_CMD_MASK )
@@ -41,14 +40,11 @@ static long hvm_memory_op(int cmd, XEN_G
         return -ENOSYS;
     }
 
-    if ( !curr->hcall_compat )
+    if ( !current->hcall_compat )
         rc = do_memory_op(cmd, arg);
     else
         rc = compat_memory_op(cmd, arg);
 
-    if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
-        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
-
     return rc;
 }
 
@@ -326,14 +322,11 @@ int hvm_hypercall(struct cpu_user_regs *
 
     HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx", eax, regs->rax);
 
-    if ( curr->hcall_preempted )
-        return HVM_HCALL_preempted;
-
     if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
          test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
         send_invalidate_req();
 
-    return HVM_HCALL_completed;
+    return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
 }
 
 /*
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -812,6 +812,9 @@ p2m_remove_page(struct p2m_domain *p2m,
         }
     }
 
+    if ( p2m->domain == current->domain )
+        p2m->domain->arch.hvm.qemu_mapcache_invalidate = true;
+
     return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
                          p2m->default_access);
 }