
[Xen-changelog] [xen-unstable] xen, pod: Try to reclaim superpages when ballooning down


  • To: xen-changelog@xxxxxxxxxxxxxxxxxxx
  • From: Xen patchbot-unstable <patchbot@xxxxxxx>
  • Date: Wed, 04 Jul 2012 04:11:13 +0000
  • Delivery-date: Wed, 04 Jul 2012 04:11:19 +0000
  • List-id: "Change log for Mercurial (receive only)" <xen-changelog.lists.xen.org>

# HG changeset patch
# User George Dunlap <george.dunlap@xxxxxxxxxxxxx>
# Date 1340893080 -3600
# Node ID 0ff7269bbcedc22db1f48661dd52355081c19eaf
# Parent  ae7d717abb7c28357c37b3a1b1a3b685acf445de
xen,pod: Try to reclaim superpages when ballooning down

Windows balloon drivers can typically only get 4k pages from the kernel,
and so hand them back at that level.  Try to regain superpages by checking
whether the superpage frame containing the returned 4k page can be reclaimed
in its entirety for the PoD cache.

This also modifies p2m_pod_zero_check_superpage() to return SUPERPAGE_PAGES on
success.
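
To illustrate the idea outside of Xen: mask the gfn handed back by the balloon
driver down to its 2MB-aligned superpage frame, then ask whether the whole
frame can be reclaimed.  The sketch below is a standalone illustration only;
can_reclaim_superpage() is a hypothetical stand-in for the real
p2m_pod_zero_check_superpage(), and SUPERPAGE_PAGES mirrors the Xen value for
2MB superpages built from 4k pages.

#include <stdio.h>

#define SUPERPAGE_ORDER 9                       /* 2MB superpage = 2^9 4k pages */
#define SUPERPAGE_PAGES (1UL << SUPERPAGE_ORDER)

/* Hypothetical stand-in for p2m_pod_zero_check_superpage(): the real code
 * verifies that every 4k page in the frame is zeroed and unused before
 * stealing it for the PoD cache. */
static int can_reclaim_superpage(unsigned long sfn)
{
    (void)sfn;
    return 0;
}

int main(void)
{
    unsigned long gfn = 0x12345;                      /* 4k page returned by the balloon driver */
    unsigned long sfn = gfn & ~(SUPERPAGE_PAGES - 1); /* containing superpage frame */

    printf("gfn %#lx lies in superpage frame %#lx\n", gfn, sfn);

    if ( can_reclaim_superpage(sfn) )
        printf("reclaimed %lu pages for the PoD cache\n", SUPERPAGE_PAGES);
    else
        printf("superpage not reclaimable; handle the 4k page as before\n");

    return 0;
}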

v2:
 - Rewritten to simply do the check as in the demand-fault case, without
   needing to know that the p2m entry is a superpage.
 - Also, took out the re-writing of the reclaim loop, leaving it optimized for
   4k pages (by far the most common case), and simplifying the patch.
v3:
 - Add SoB

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Acked-by: Tim Deegan <tim@xxxxxxx>
Committed-by: Tim Deegan <tim@xxxxxxx>
---


diff -r ae7d717abb7c -r 0ff7269bbced xen/arch/x86/mm/p2m-pod.c
--- a/xen/arch/x86/mm/p2m-pod.c Thu Jun 28 16:11:59 2012 +0100
+++ b/xen/arch/x86/mm/p2m-pod.c Thu Jun 28 15:18:00 2012 +0100
@@ -488,6 +488,10 @@ p2m_pod_offline_or_broken_replace(struct
     return;
 }
 
+static int
+p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn);
+
+
 /* This function is needed for two reasons:
  * + To properly handle clearing of PoD entries
  * + To "steal back" memory being freed for the PoD cache, rather than
@@ -505,8 +509,8 @@ p2m_pod_decrease_reservation(struct doma
     int i;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
-    int steal_for_cache = 0;
-    int pod = 0, nonpod = 0, ram = 0;
+    int steal_for_cache;
+    int pod, nonpod, ram;
 
     gfn_lock(p2m, gpfn, order);
     pod_lock(p2m);    
@@ -516,13 +520,15 @@ p2m_pod_decrease_reservation(struct doma
     if ( p2m->pod.entry_count == 0 )
         goto out_unlock;
 
+    if ( unlikely(d->is_dying) )
+        goto out_unlock;
+
+recount:
+    pod = nonpod = ram = 0;
+
     /* Figure out if we need to steal some freed memory for our cache */
     steal_for_cache =  ( p2m->pod.entry_count > p2m->pod.count );
 
-    if ( unlikely(d->is_dying) )
-        goto out_unlock;
-
-    /* See what's in here. */
     /* FIXME: Add contiguous; query for PSE entries? */
     for ( i=0; i<(1<<order); i++)
     {
@@ -556,7 +562,16 @@ p2m_pod_decrease_reservation(struct doma
         goto out_entry_check;
     }
 
-    /* FIXME: Steal contig 2-meg regions for cache */
+    /* Try to grab entire superpages if possible.  Since the common case is for drivers
+     * to pass back singleton pages, see if we can take the whole page back and mark the
+     * rest PoD. */
+    if ( steal_for_cache
+         && p2m_pod_zero_check_superpage(p2m, gpfn & ~(SUPERPAGE_PAGES-1)))
+    {
+        /* Since order may be arbitrary, we may have taken more or less
+         * than we were actually asked to; so just re-count from scratch */
+        goto recount;
+    }
 
     /* Process as long as:
      * + There are PoD entries to handle, or
@@ -758,6 +773,8 @@ p2m_pod_zero_check_superpage(struct p2m_
     p2m_pod_cache_add(p2m, mfn_to_page(mfn0), PAGE_ORDER_2M);
     p2m->pod.entry_count += SUPERPAGE_PAGES;
 
+    ret = SUPERPAGE_PAGES;
+
 out_reset:
     if ( reset )
         set_p2m_entry(p2m, gfn, mfn0, 9, type0, p2m->default_access);

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog