
[Xen-changelog] [xen-4.1-testing] More efficient TLB-flush filtering in alloc_heap_pages().


  • To: xen-changelog@xxxxxxxxxxxxxxxxxxx
  • From: Xen patchbot-4.1-testing <patchbot@xxxxxxx>
  • Date: Mon, 29 Oct 2012 23:55:13 +0000
  • Delivery-date: Mon, 29 Oct 2012 23:55:21 +0000
  • List-id: "Change log for Mercurial \(receive only\)" <xen-changelog.lists.xen.org>

# HG changeset patch
# User Keir Fraser <keir@xxxxxxx>
# Date 1351497712 -3600
# Node ID a18ca08c475e020da935c2024cebc2d8b97074cb
# Parent  de551622e12493ca8747a69d68cd08103411a220
More efficient TLB-flush filtering in alloc_heap_pages().

Rather than per-cpu filtering for every page in a super-page
allocation, simply remember the most recent TLB timestamp across all
allocated pages, and filter on that, just once, at the end of the
function.

On systems with many CPUs, where domain creation performs many 2MB
allocations, this cuts down the domain creation time *massively*.

TODO: It may make sense to move the filtering out into some callers,
such as memory.c:populate_physmap() and
memory.c:increase_reservation(), so that the filtering can be moved
outside their loops, too.

Signed-off-by: Keir Fraser <keir@xxxxxxx>
xen-unstable changeset: 26056:177fdda0be56
xen-unstable date: Mon Oct 15 15:38:11 UTC 2012
---


diff -r de551622e124 -r a18ca08c475e xen/common/page_alloc.c
--- a/xen/common/page_alloc.c   Mon Oct 29 09:01:14 2012 +0100
+++ b/xen/common/page_alloc.c   Mon Oct 29 09:01:52 2012 +0100
@@ -303,9 +303,10 @@ static struct page_info *alloc_heap_page
     unsigned int first_node, i, j, zone = 0, nodemask_retry = 0;
     unsigned int node = (uint8_t)((memflags >> _MEMF_node) - 1);
     unsigned long request = 1UL << order;
-    cpumask_t extra_cpus_mask, mask;
     struct page_info *pg;
     nodemask_t nodemask = (d != NULL ) ? d->node_affinity : node_online_map;
+    bool_t need_tlbflush = 0;
+    uint32_t tlbflush_timestamp = 0;
 
     if ( node == NUMA_NO_NODE )
     {
@@ -417,20 +418,19 @@ static struct page_info *alloc_heap_page
     if ( d != NULL )
         d->last_alloc_node = node;
 
-    cpus_clear(mask);
-
     for ( i = 0; i < (1 << order); i++ )
     {
         /* Reference count must continuously be zero for free pages. */
         BUG_ON(pg[i].count_info != PGC_state_free);
         pg[i].count_info = PGC_state_inuse;
 
-        if ( pg[i].u.free.need_tlbflush )
+        if ( pg[i].u.free.need_tlbflush &&
+             (pg[i].tlbflush_timestamp <= tlbflush_current_time()) &&
+             (!need_tlbflush ||
+              (pg[i].tlbflush_timestamp > tlbflush_timestamp)) )
         {
-            /* Add in extra CPUs that need flushing because of this page. */
-            cpus_andnot(extra_cpus_mask, cpu_online_map, mask);
-            tlbflush_filter(extra_cpus_mask, pg[i].tlbflush_timestamp);
-            cpus_or(mask, mask, extra_cpus_mask);
+            need_tlbflush = 1;
+            tlbflush_timestamp = pg[i].tlbflush_timestamp;
         }
 
         /* Initialise fields which have other uses for free pages. */
@@ -440,10 +440,15 @@ static struct page_info *alloc_heap_page
 
     spin_unlock(&heap_lock);
 
-    if ( unlikely(!cpus_empty(mask)) )
+    if ( need_tlbflush )
     {
-        perfc_incr(need_flush_tlb_flush);
-        flush_tlb_mask(&mask);
+        cpumask_t mask = cpu_online_map;
+        tlbflush_filter(mask, tlbflush_timestamp);
+        if ( !cpus_empty(mask) )
+        {
+            perfc_incr(need_flush_tlb_flush);
+            flush_tlb_mask(&mask);
+        }
     }
 
     return pg;

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog


 

