
Re: [Xen-devel] slow xp hibernation revisited



On Sat, 4 Jun 2011, Keir Fraser wrote:
> Also, looking at qemu_map_cache() now, the early exit at the top of the
> function looks a bit bogus to me. It exits successfully if we hit the same
> address_index as last invocation, even though we might be hitting a
> different pfn within the indexed range, and a possibly invalid/unmapped pfn
> at that.
> 

Yes, I am afraid you are right.
The mistake is remembering the last address_index rather than
the last page address.
I'll submit a similar patch to upstream qemu.
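
To make the failure mode concrete, here is a minimal sketch of the
arithmetic (MCACHE_BUCKET_SHIFT of 20 and XC_PAGE_SHIFT of 12 are
assumed for illustration; any bucket size larger than a page shows the
same problem):

#include <assert.h>
#include <stdint.h>

/* Assumed for illustration: 1MB mapcache buckets, 4KB guest pages. */
#define MCACHE_BUCKET_SHIFT 20
#define XC_PAGE_SHIFT       12

int main(void)
{
    uint64_t a = 0x100000; /* first page of bucket 1 */
    uint64_t b = 0x101000; /* second page of bucket 1 */

    /* Old fast-path test: a and b share an address_index, so the
     * cached vaddr is reused for b... */
    assert((a >> MCACHE_BUCKET_SHIFT) == (b >> MCACHE_BUCKET_SHIFT));

    /* ...even though they are different pages, and b's page never
     * went through the valid_mapping check. Comparing at page
     * granularity, as the patch below does, tells them apart. */
    assert((a >> XC_PAGE_SHIFT) != (b >> XC_PAGE_SHIFT));

    return 0;
}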

---

mapcache: remember the last page address rather than the last address_index

A single address_index corresponds to multiple pages, each of which
might or might not be mapped.
It is better to remember the last page address for this optimization,
so that we can be sure the cached page is actually mapped.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>


diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
index a353ee6..603a508 100644
--- a/hw/xen_machine_fv.c
+++ b/hw/xen_machine_fv.c
@@ -63,7 +63,7 @@ static unsigned long nr_buckets;
 TAILQ_HEAD(map_cache_head, map_cache_rev) locked_entries = TAILQ_HEAD_INITIALIZER(locked_entries);
 
 /* For most cases (>99.9%), the page address is the same. */
-static unsigned long last_address_index = ~0UL;
+static unsigned long last_address_page = ~0UL;
 static uint8_t      *last_address_vaddr;
 
 static int qemu_map_cache_init(void)
@@ -138,7 +138,7 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, uint8_t lock)
     unsigned long address_index  = phys_addr >> MCACHE_BUCKET_SHIFT;
     unsigned long address_offset = phys_addr & (MCACHE_BUCKET_SIZE-1);
 
-    if (address_index == last_address_index && !lock)
+    if ((phys_addr >> XC_PAGE_SHIFT) == last_address_page && !lock)
         return last_address_vaddr + address_offset;
 
     entry = &mapcache_entry[address_index % nr_buckets];
@@ -157,17 +157,17 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, uint8_t lock)
     }
 
     if (!test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping)) {
-        last_address_index = ~0UL;
+        last_address_page = ~0UL;
         return NULL;
     }
 
-    last_address_index = address_index;
+    last_address_page = phys_addr >> XC_PAGE_SHIFT;
     last_address_vaddr = entry->vaddr_base;
     if (lock) {
         struct map_cache_rev *reventry = qemu_mallocz(sizeof(struct map_cache_rev));
         entry->lock++;
         reventry->vaddr_req = last_address_vaddr + address_offset;
-        reventry->paddr_index = last_address_index;
+        reventry->paddr_index = address_index;
         TAILQ_INSERT_TAIL(&locked_entries, reventry, next);
     }
 
@@ -182,7 +182,7 @@ void qemu_invalidate_entry(uint8_t *buffer)
     int found = 0;
     
     if (last_address_vaddr == buffer)
-        last_address_index =  ~0UL;
+        last_address_page =  ~0UL;
 
     TAILQ_FOREACH(reventry, &locked_entries, next) {
         if (reventry->vaddr_req == buffer) {
@@ -252,7 +252,7 @@ void qemu_invalidate_map_cache(void)
         entry->vaddr_base  = NULL;
     }
 
-    last_address_index =  ~0UL;
+    last_address_page =  ~0UL;
     last_address_vaddr = NULL;
 
     mapcache_unlock();

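Note on the fix: last_address_page is only assigned after the
test_bit() check on entry->valid_mapping has succeeded, so a fast-path
hit can never hand back an unmapped page. The returned pointer is
still vaddr_base plus the offset within the bucket, which works for
every page of the bucket. A minimal sketch of that offset arithmetic
(same assumed shift values as above):

#include <assert.h>
#include <stdint.h>

#define MCACHE_BUCKET_SHIFT 20   /* assumed, as above */
#define MCACHE_BUCKET_SIZE  (1UL << MCACHE_BUCKET_SHIFT)
#define XC_PAGE_SHIFT       12

int main(void)
{
    /* Two accesses to the same page of the same bucket... */
    uint64_t first  = 0x101234;
    uint64_t second = 0x101abc;

    /* ...match under the new page-granularity test... */
    assert((first >> XC_PAGE_SHIFT) == (second >> XC_PAGE_SHIFT));

    /* ...and both resolve through the same vaddr_base, differing
     * only in their offset within the bucket. */
    assert((first  & (MCACHE_BUCKET_SIZE - 1)) == 0x1234);
    assert((second & (MCACHE_BUCKET_SIZE - 1)) == 0x1abc);

    return 0;
}
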
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel