Re: [Xen-devel] [PATCH 3/6] xen: modify memory ops to be NUMA-aware



* Ryan Harper <ryanh@xxxxxxxxxx> [2006-07-31 14:13]:
> From [1]previous post:
> > This patch modifies three memory operations to be NUMA-aware:
> > 
> > increase_reservation
> > populate_physmap
> > memory_exchange
> > 
> > These three operations request memory from the domain heap and have been
> > modified to distribute the request evenly across the physical CPUs of the
> > target domain.  This makes memory local to the physical CPUs within the
> > domain available to the guest.
> 
> Measuring the overhead has shown the distribution to be costly and, at
> present, without specific benefit: the best case would be providing
> local memory in a multi-node guest environment, but since we don't yet
> export this virtual domain topology to Linux, the guest can't take
> advantage of the local allocations.  At this time, most domains created
> on NUMA machines have their config file parameters adjusted so that
> they fit within a single NUMA node, rendering the distribution code
> useless.  This patch removes the extra logic and instead passes the
> processor of the domain's VCPU0 into the heap allocation function.
> 
> Now domains will use VCPU0 to pick which node to allocate memory from
> (using cpu_to_node mapping) and we don't pay for logic that won't be
> leveraged.
> 
> 
> [1] http://lists.xensource.com/archives/html/xen-devel/2006-07/msg00544.html

-no changes

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx


diffstat output:
 memory.c |   21 ++++++++++++++-------
 1 files changed, 14 insertions(+), 7 deletions(-)

Signed-off-by: Ryan Harper <ryanh@xxxxxxxxxx>
---
Make memory hypercalls NUMA-aware

diff -r fa87cea10778 xen/common/memory.c
--- a/xen/common/memory.c       Tue Aug 15 11:38:13 2006 -0500
+++ b/xen/common/memory.c       Tue Aug 15 11:40:17 2006 -0500
@@ -40,6 +40,8 @@ increase_reservation(
     struct page_info *page;
     unsigned long i;
     xen_pfn_t mfn;
+    /* use domain's first processor for locality parameter */
+    unsigned int cpu = d->vcpu[0]->processor;
 
     if ( !guest_handle_is_null(extent_list) &&
          !guest_handle_okay(extent_list, nr_extents) )
@@ -57,8 +59,8 @@ increase_reservation(
             return i;
         }
 
-        if ( unlikely((page = alloc_domheap_pages(
-            d, extent_order, memflags)) == NULL) )
+        if ( unlikely((page = __alloc_domheap_pages(
+            d, cpu, extent_order, memflags)) == NULL) )
         {
             DPRINTK("Could not allocate order=%d extent: "
                     "id=%d memflags=%x (%ld of %d)\n",
@@ -91,6 +93,8 @@ populate_physmap(
     unsigned long i, j;
     xen_pfn_t gpfn;
     xen_pfn_t mfn;
+    /* use domain's first processor for locality parameter */
+    unsigned int cpu = d->vcpu[0]->processor;
 
     if ( !guest_handle_okay(extent_list, nr_extents) )
         return 0;
@@ -110,8 +114,8 @@ populate_physmap(
         if ( unlikely(__copy_from_guest_offset(&gpfn, extent_list, i, 1)) )
             goto out;
 
-        if ( unlikely((page = alloc_domheap_pages(
-            d, extent_order, memflags)) == NULL) )
+        if ( unlikely((page = __alloc_domheap_pages(
+            d, cpu, extent_order, memflags)) == NULL) )
         {
             DPRINTK("Could not allocate order=%d extent: "
                     "id=%d memflags=%x (%ld of %d)\n",
@@ -293,7 +297,7 @@ memory_exchange(XEN_GUEST_HANDLE(xen_mem
     unsigned long in_chunk_order, out_chunk_order;
     xen_pfn_t     gpfn, gmfn, mfn;
     unsigned long i, j, k;
-    unsigned int  memflags = 0;
+    unsigned int  memflags = 0, cpu;
     long          rc = 0;
     struct domain *d;
     struct page_info *page;
@@ -367,6 +371,9 @@ memory_exchange(XEN_GUEST_HANDLE(xen_mem
     }
     d = current->domain;
 
+    /* use domain's first processor for locality parameter */
+    cpu = d->vcpu[0]->processor;
+
     for ( i = 0; i < (exch.in.nr_extents >> in_chunk_order); i++ )
     {
         if ( hypercall_preempt_check() )
@@ -412,8 +419,8 @@ memory_exchange(XEN_GUEST_HANDLE(xen_mem
         /* Allocate a chunk's worth of anonymous output pages. */
         for ( j = 0; j < (1UL << out_chunk_order); j++ )
         {
-            page = alloc_domheap_pages(
-                NULL, exch.out.extent_order, memflags);
+            page = __alloc_domheap_pages(
+                NULL, cpu, exch.out.extent_order, memflags);
             if ( unlikely(page == NULL) )
             {
                 rc = -ENOMEM;

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
