
[Xen-changelog] [xen-unstable] [XEN] The shadow FAST_FAULT_PATH optimisation assumes that pages never



# HG changeset patch
# User ssmith@xxxxxxxxxxxxxxxxxxxxx
# Node ID 3e2b6365ba75f4756e4961f76239a82fe0b15f4a
# Parent  0bea8e77350892af409ccd29463eb22bf09cb9f3
[XEN] The shadow FAST_FAULT_PATH optimisation assumes that pages never
transition between mmio and RAM-backed.  This isn't true after an
add_to_physmap memory op.  Fix this by just blowing the shadow tables
after every such operation; they're rare enough that the performance
hit is not a concern.
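
For illustration, here is roughly what the fast path caches; the names
below are hypothetical sketches, not the real shadow-code identifiers:

    /* Hypothetical sketch of a FAST_FAULT_PATH-style mmio cache.  A
     * fault on an mmio gfn installs a magic not-present entry in the
     * shadow L1, so later faults on that gfn can be handled without
     * re-walking the guest page tables. */
    #define SH_MMIO_MAGIC 0xffff000000000000ULL /* marker bits (made up) */

    static inline unsigned long sh_mmio_entry(unsigned long gfn)
    {
        /* Not-present entry that remembers "this gfn is mmio". */
        return SH_MMIO_MAGIC | (gfn << 12);
    }

Once add_to_physmap backs such a gfn with RAM, the cached entry is
stale: faults on it are still decoded as mmio, and nothing on the fast
path invalidates it.  Discarding every shadow forces all entries to be
refetched, at which point the gfn is seen as RAM-backed.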

Signed-off-by: Steven Smith <sos22@xxxxxxxxx>
Acked-by: Tim Deegan <Tim.Deegan@xxxxxxxxxxxxx>
---
 xen/arch/x86/mm.c               |   12 +++++++++++-
 xen/arch/x86/mm/shadow/common.c |    2 +-
 xen/include/asm-x86/shadow.h    |    3 +++
 3 files changed, 15 insertions(+), 2 deletions(-)

diff -r 0bea8e773508 -r 3e2b6365ba75 xen/arch/x86/mm.c
--- a/xen/arch/x86/mm.c Tue Dec 05 17:01:34 2006 +0000
+++ b/xen/arch/x86/mm.c Mon Dec 11 11:16:29 2006 -0800
@@ -2968,7 +2968,17 @@ long arch_memory_op(int op, XEN_GUEST_HA
         guest_physmap_add_page(d, xatp.gpfn, mfn);
 
         UNLOCK_BIGLOCK(d);
-        
+
+        /* If we're doing FAST_FAULT_PATH, then shadow mode may have
+           cached the fact that this is an mmio region in the shadow
+           page tables.  Blow the tables away to remove the cache.
+           This is pretty heavy handed, but this is a rare operation
+           (it might happen a dozen times during boot and then never
+           again), so it doesn't matter too much. */
+        shadow_lock(d);
+        shadow_blow_tables(d);
+        shadow_unlock(d);
+
         put_domain(d);
 
         break;
diff -r 0bea8e773508 -r 3e2b6365ba75 xen/arch/x86/mm/shadow/common.c
--- a/xen/arch/x86/mm/shadow/common.c   Tue Dec 05 17:01:34 2006 +0000
+++ b/xen/arch/x86/mm/shadow/common.c   Mon Dec 11 11:16:29 2006 -0800
@@ -733,7 +733,7 @@ void shadow_prealloc(struct domain *d, u
 
 /* Deliberately free all the memory we can: this will tear down all of
  * this domain's shadows */
-static void shadow_blow_tables(struct domain *d) 
+void shadow_blow_tables(struct domain *d) 
 {
     struct list_head *l, *t;
     struct shadow_page_info *sp;
diff -r 0bea8e773508 -r 3e2b6365ba75 xen/include/asm-x86/shadow.h
--- a/xen/include/asm-x86/shadow.h      Tue Dec 05 17:01:34 2006 +0000
+++ b/xen/include/asm-x86/shadow.h      Mon Dec 11 11:16:29 2006 -0800
@@ -540,6 +540,9 @@ extern int shadow_remove_write_access(st
  * Returns non-zero if we need to flush TLBs. */
 extern int shadow_remove_all_mappings(struct vcpu *v, mfn_t target_mfn);
 
+/* Remove all mappings from the shadows. */
+extern void shadow_blow_tables(struct domain *d);
+
 void
 shadow_remove_all_shadows_and_parents(struct vcpu *v, mfn_t gmfn);
/* This is an HVM page that we think is no longer a pagetable.
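
Since shadow_blow_tables() is now visible outside the shadow code, note
the calling convention the mm.c hunk above relies on: the caller must
hold the per-domain shadow lock around the call, roughly:

    /* Usage as in arch_memory_op() above. */
    shadow_lock(d);
    shadow_blow_tables(d);  /* tear down all shadows; refetched on next fault */
    shadow_unlock(d);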
