
[Xen-changelog] [xen-unstable] More efficient TLB-flush filtering in alloc_heap_pages().


  • To: xen-changelog@xxxxxxxxxxxxxxxxxxx
  • From: Xen patchbot-unstable <patchbot@xxxxxxx>
  • Date: Tue, 16 Oct 2012 07:11:10 +0000
  • Delivery-date: Tue, 16 Oct 2012 07:11:45 +0000
  • List-id: "Change log for Mercurial (receive only)" <xen-changelog.lists.xen.org>

# HG changeset patch
# User Keir Fraser <keir@xxxxxxx>
# Date 1350315491 -3600
# Node ID 177fdda0be568ccdb62697b64aa64ee20bc55bee
# Parent  14e32621dbaf5b485b134ace4558e67c4c36e1ce
More efficient TLB-flush filtering in alloc_heap_pages().

Rather than filtering the TLB-flush CPU mask once per page of a
super-page allocation, simply remember the most recent TLB-flush
timestamp across all allocated pages, and filter on that, just once,
at the end of the function.
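
In outline, the new scheme reduces to the following standalone sketch
(not the Xen code itself: page_t, current_time() and
flush_cpus_stale_since() are hypothetical stand-ins for the real
struct page_info, tlbflush_current_time() and the filter-plus-flush
step):

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool need_tlbflush;          /* freed without an intervening flush? */
    uint32_t tlbflush_timestamp; /* flush clock when the page was freed */
} page_t;

extern uint32_t current_time(void);                 /* global flush clock */
extern void flush_cpus_stale_since(uint32_t stamp); /* filter + IPI flush */

void allocate_pages(page_t *pg, unsigned int npages)
{
    bool need_tlbflush = false;
    uint32_t tlbflush_timestamp = 0;
    unsigned int i;

    for ( i = 0; i < npages; i++ )
    {
        /* Track only the newest valid stamp; ignore stamps ahead of
         * the flush clock, mirroring the patch's check against
         * tlbflush_current_time(). */
        if ( pg[i].need_tlbflush &&
             pg[i].tlbflush_timestamp <= current_time() &&
             (!need_tlbflush ||
              pg[i].tlbflush_timestamp > tlbflush_timestamp) )
        {
            need_tlbflush = true;
            tlbflush_timestamp = pg[i].tlbflush_timestamp;
        }
    }

    /* One filter pass over the online CPUs instead of one per page. */
    if ( need_tlbflush )
        flush_cpus_stale_since(tlbflush_timestamp);
}

A single timestamp suffices because any CPU that has flushed since the
newest stamp has necessarily also flushed since every older stamp.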

On systems with many CPUs, where domain creation performs long runs of
2MB allocations, this cuts down the domain creation time *massively*.

TODO: It may make sense to move the filtering out into callers such as
memory.c:populate_physmap() and memory.c:increase_reservation(), so
that it can be hoisted outside their allocation loops as well.
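
A caller-side version of that TODO might look roughly as follows;
alloc_pages_deferred() and flush_cpus_stale_since() are illustrative
stand-ins, since the patch does not define an interface for deferring
the flush to the caller:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical allocator that reports the newest stale timestamp
 * instead of flushing itself. */
extern void *alloc_pages_deferred(unsigned int order,
                                  bool *need_flush, uint32_t *stamp);
extern void flush_cpus_stale_since(uint32_t stamp);

void populate_physmap_style_loop(unsigned int nr_extents,
                                 unsigned int order)
{
    bool need_flush = false;
    uint32_t newest = 0;
    unsigned int i;

    for ( i = 0; i < nr_extents; i++ )
    {
        bool nf = false;
        uint32_t s = 0;

        if ( alloc_pages_deferred(order, &nf, &s) == NULL )
            break;

        /* Accumulate across extents exactly as the allocator now
         * accumulates across pages. */
        if ( nf && (!need_flush || s > newest) )
        {
            need_flush = true;
            newest = s;
        }
    }

    /* A single flush covers the whole reservation. */
    if ( need_flush )
        flush_cpus_stale_since(newest);
}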

Signed-off-by: Keir Fraser <keir@xxxxxxx>
---


diff -r 14e32621dbaf -r 177fdda0be56 xen/common/page_alloc.c
--- a/xen/common/page_alloc.c   Mon Oct 15 15:04:51 2012 +0200
+++ b/xen/common/page_alloc.c   Mon Oct 15 16:38:11 2012 +0100
@@ -414,9 +414,10 @@ static struct page_info *alloc_heap_page
     unsigned int first_node, i, j, zone = 0, nodemask_retry = 0;
     unsigned int node = (uint8_t)((memflags >> _MEMF_node) - 1);
     unsigned long request = 1UL << order;
-    cpumask_t mask;
     struct page_info *pg;
     nodemask_t nodemask = (d != NULL ) ? d->node_affinity : node_online_map;
+    bool_t need_tlbflush = 0;
+    uint32_t tlbflush_timestamp = 0;
 
     if ( node == NUMA_NO_NODE )
     {
@@ -530,22 +531,19 @@ static struct page_info *alloc_heap_page
     if ( d != NULL )
         d->last_alloc_node = node;
 
-    cpumask_clear(&mask);
-
     for ( i = 0; i < (1 << order); i++ )
     {
         /* Reference count must continuously be zero for free pages. */
         BUG_ON(pg[i].count_info != PGC_state_free);
         pg[i].count_info = PGC_state_inuse;
 
-        if ( pg[i].u.free.need_tlbflush )
+        if ( pg[i].u.free.need_tlbflush &&
+             (pg[i].tlbflush_timestamp <= tlbflush_current_time()) &&
+             (!need_tlbflush ||
+              (pg[i].tlbflush_timestamp > tlbflush_timestamp)) )
         {
-            /* Add in extra CPUs that need flushing because of this page. */
-            static cpumask_t extra_cpus_mask;
-
-            cpumask_andnot(&extra_cpus_mask, &cpu_online_map, &mask);
-            tlbflush_filter(extra_cpus_mask, pg[i].tlbflush_timestamp);
-            cpumask_or(&mask, &mask, &extra_cpus_mask);
+            need_tlbflush = 1;
+            tlbflush_timestamp = pg[i].tlbflush_timestamp;
         }
 
         /* Initialise fields which have other uses for free pages. */
@@ -555,10 +553,15 @@ static struct page_info *alloc_heap_page
 
     spin_unlock(&heap_lock);
 
-    if ( unlikely(!cpumask_empty(&mask)) )
+    if ( need_tlbflush )
     {
-        perfc_incr(need_flush_tlb_flush);
-        flush_tlb_mask(&mask);
+        cpumask_t mask = cpu_online_map;
+        tlbflush_filter(mask, tlbflush_timestamp);
+        if ( !cpumask_empty(&mask) )
+        {
+            perfc_incr(need_flush_tlb_flush);
+            flush_tlb_mask(&mask);
+        }
     }
 
     return pg;
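
For context, the final hunk's filtering step keeps a CPU in the flush
mask only if that CPU has not flushed its TLB since the remembered
timestamp. Modeled loosely on tlbflush_filter(), with a plain bool
array standing in for cpumask_t, per-CPU flush times kept in a
hypothetical tlbflush_time[], and the wrap-around handling of the real
comparison omitted, the semantics are roughly:

#include <stdbool.h>
#include <stdint.h>

#define NR_CPUS 256 /* hypothetical fixed CPU count for the sketch */

/* Per-CPU record of when each CPU last flushed (illustrative). */
static uint32_t tlbflush_time[NR_CPUS];

/* Drop CPUs that flushed after 'page_timestamp'; only CPUs that may
 * still hold stale translations remain set in the mask. */
static void filter_flush_mask(bool mask[NR_CPUS], uint32_t page_timestamp)
{
    unsigned int cpu;

    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
        if ( mask[cpu] && tlbflush_time[cpu] > page_timestamp )
            mask[cpu] = false; /* already flushed; no IPI needed */
}

Because the mask starts from cpu_online_map, any CPU that has flushed
since the remembered stamp is dropped, and in the common case no IPIs
need to be sent at all.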

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog