
[Xen-changelog] [xen-unstable] x86 hvm: Improve paging performance for 64b solaris guests



# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1212658579 -3600
# Node ID 24c86abbb387c795118648821416848e66481ff8
# Parent  02132fc864b436336aca20cb4aee60112d0fd5a9
x86 hvm: Improve paging performance for 64b solaris guests

The following patch provides a 'fast-path' for sh_remove_write_access()
for 64-bit Solaris HVM guests. This provides a significant performance
boost for such guests; our testing shows a 200-400% improvement in
microbenchmarks such as fork() and exit().

From: Gary Pennington <Gary.Pennington@xxxxxxx>
Signed-off-by: Keir Fraser <keir.fraser@xxxxxxxxxx>
---
 xen/arch/x86/mm/shadow/common.c  |    5 +++++
 xen/arch/x86/mm/shadow/multi.c   |    2 ++
 xen/include/asm-x86/perfc_defn.h |    3 +--
 3 files changed, 8 insertions(+), 2 deletions(-)

diff -r 02132fc864b4 -r 24c86abbb387 xen/arch/x86/mm/shadow/common.c
--- a/xen/arch/x86/mm/shadow/common.c   Thu Jun 05 10:34:01 2008 +0100
+++ b/xen/arch/x86/mm/shadow/common.c   Thu Jun 05 10:36:19 2008 +0100
@@ -1738,6 +1738,11 @@ int sh_remove_write_access(struct vcpu *
             gfn = mfn_to_gfn(v->domain, gmfn); 
             GUESS(0xffff810000000000UL + (gfn << PAGE_SHIFT), 4); 
             GUESS(0x0000010000000000UL + (gfn << PAGE_SHIFT), 4); 
+            /*
+             * 64bit Solaris kernel page map at
+             * kpm_vbase; 0xfffffe0000000000UL
+             */
+            GUESS(0xfffffe0000000000UL + (gfn << PAGE_SHIFT), 4);
         }
 #endif /* CONFIG_PAGING_LEVELS >= 4 */
 
diff -r 02132fc864b4 -r 24c86abbb387 xen/arch/x86/mm/shadow/multi.c
--- a/xen/arch/x86/mm/shadow/multi.c    Thu Jun 05 10:34:01 2008 +0100
+++ b/xen/arch/x86/mm/shadow/multi.c    Thu Jun 05 10:36:19 2008 +0100
@@ -4007,7 +4007,9 @@ int sh_rm_write_access_from_l1(struct vc
     shadow_l1e_t *sl1e;
     int done = 0;
     int flags;
+#if SHADOW_OPTIMIZATIONS & SHOPT_WRITABLE_HEURISTIC 
     mfn_t base_sl1mfn = sl1mfn; /* Because sl1mfn changes in the foreach */
+#endif
     
     SHADOW_FOREACH_L1E(sl1mfn, sl1e, 0, done, 
     {
diff -r 02132fc864b4 -r 24c86abbb387 xen/include/asm-x86/perfc_defn.h
--- a/xen/include/asm-x86/perfc_defn.h  Thu Jun 05 10:34:01 2008 +0100
+++ b/xen/include/asm-x86/perfc_defn.h  Thu Jun 05 10:36:19 2008 +0100
@@ -77,8 +77,7 @@ PERFCOUNTER(shadow_writeable_h_1,  "shad
 PERFCOUNTER(shadow_writeable_h_1,  "shadow writeable: 32b w2k3")
 PERFCOUNTER(shadow_writeable_h_2,  "shadow writeable: 32pae w2k3")
 PERFCOUNTER(shadow_writeable_h_3,  "shadow writeable: 64b w2k3")
-PERFCOUNTER(shadow_writeable_h_4,  "shadow writeable: 32b linux low")
-PERFCOUNTER(shadow_writeable_h_5,  "shadow writeable: 32b linux high")
+PERFCOUNTER(shadow_writeable_h_4,  "shadow writeable: linux/solaris")
 PERFCOUNTER(shadow_writeable_bf,   "shadow writeable brute-force")
 PERFCOUNTER(shadow_mappings,       "shadow removes all mappings")
 PERFCOUNTER(shadow_mappings_bf,    "shadow rm-mappings brute-force")

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog