Re: [Xen-devel] [PATCH 3/6] xen: modify memory ops to be NUMA-aware
* Ryan Harper <ryanh@xxxxxxxxxx> [2006-07-11 10:52]:
> This patch modifies three memory operations to be NUMA-aware:
>
>     increase_reservation
>     populate_physmap
>     memory_exchange
>
> These three operations request memory from the domain heap and have
> been modified to distribute the request across the physical cpus of
> the target domain evenly.  This makes memory local to the physical
> cpus within the domain available for the guest.

Measuring the overhead has shown the distribution to be costly while
providing, at the current time, no specific benefit, since the best case
would be providing local memory in a multi-node guest environment.  As
we currently don't export this virtual domain topology to Linux, the
guest can't take advantage of the local allocations.  At this time, most
domains created on NUMA machines will have their config file parameters
adjusted to ensure they fit within a single NUMA node, rendering the
distribution code useless.

This patch removes the extra logic and instead uses the processor of the
domain's VCPU0 as the locality parameter to the heap allocation
function.  Domains now use VCPU0 to pick which node to allocate memory
from (via the cpu_to_node mapping), and we don't pay for distribution
logic that won't be leveraged.
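As a rough, standalone illustration of that idea (this is not the Xen
code itself; the cpu_to_node[] table, the struct layouts and the
alloc_near_node() helper below are simplified, hypothetical stand-ins),
the allocation path boils down to deriving a node from VCPU0's processor
and preferring that node:

/*
 * Illustrative sketch only, not the Xen implementation.  Shows how a
 * caller can derive a NUMA node hint from a domain's first VCPU and
 * hand it to an allocator.
 */
#include <stdio.h>

#define NR_CPUS 8

/* Hypothetical CPU-to-node map for a two-node box, 4 cpus per node. */
static const unsigned int cpu_to_node[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

struct vcpu   { unsigned int processor; };
struct domain { struct vcpu *vcpu[1]; };

/* Stand-in for the real allocator: just report which node it would use. */
static void *alloc_near_node(unsigned int node, unsigned int order)
{
    printf("allocating order-%u extent from node %u\n", order, node);
    return NULL;
}

static void *alloc_for_domain(struct domain *d, unsigned int order)
{
    /* Use the domain's first processor as the locality hint. */
    unsigned int cpu = d->vcpu[0]->processor;
    return alloc_near_node(cpu_to_node[cpu], order);
}

int main(void)
{
    struct vcpu v0 = { .processor = 5 };     /* vcpu0 running on cpu 5 */
    struct domain d = { .vcpu = { &v0 } };
    alloc_for_domain(&d, 0);                 /* would prefer node 1 */
    return 0;
}

In the actual patch the same hint is simply passed as the extra 'cpu'
argument to __alloc_domheap_pages(), as shown in the diff below.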
-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx


diffstat output:
 memory.c |   21 ++++++++++++++-------
 1 files changed, 14 insertions(+), 7 deletions(-)

Signed-off-by: Ryan Harper <ryanh@xxxxxxxxxx>
---
# HG changeset patch
# User Ryan Harper <ryanh@xxxxxxxxxx>
# Node ID d5c77144ba21ab00f2cb92405e9f00a3e0951821
# Parent  b64b6d6c0440adb7f0cb91c13ca60b0832966cf6
03 increase_reservation/pop_physmap/mem_exch

diff -r b64b6d6c0440 -r d5c77144ba21 xen/common/memory.c
--- a/xen/common/memory.c       Sat Jul 15 07:08:39 2006
+++ b/xen/common/memory.c       Sat Jul 15 07:24:04 2006
@@ -40,6 +40,8 @@
     struct page_info *page;
     unsigned long i;
     xen_pfn_t mfn;
+    /* use domain's first processor for locality parameter */
+    unsigned int cpu = d->vcpu[0]->processor;
 
     if ( !guest_handle_is_null(extent_list) &&
          !guest_handle_okay(extent_list, nr_extents) )
@@ -57,8 +59,8 @@
             return i;
         }
 
-        if ( unlikely((page = alloc_domheap_pages(
-            d, extent_order, memflags)) == NULL) )
+        if ( unlikely((page = __alloc_domheap_pages( d, cpu,
+            extent_order, memflags )) == NULL) )
         {
             DPRINTK("Could not allocate order=%d extent: "
                     "id=%d memflags=%x (%ld of %d)\n",
@@ -91,6 +93,8 @@
     unsigned long i, j;
     xen_pfn_t gpfn;
     xen_pfn_t mfn;
+    /* use domain's first processor for locality parameter */
+    unsigned int cpu = d->vcpu[0]->processor;
 
     if ( !guest_handle_okay(extent_list, nr_extents) )
         return 0;
@@ -110,8 +114,8 @@
         if ( unlikely(__copy_from_guest_offset(&gpfn, extent_list, i, 1)) )
             goto out;
 
-        if ( unlikely((page = alloc_domheap_pages(
-            d, extent_order, memflags)) == NULL) )
+        if ( unlikely((page = __alloc_domheap_pages( d, cpu,
+            extent_order, memflags )) == NULL) )
         {
             DPRINTK("Could not allocate order=%d extent: "
                     "id=%d memflags=%x (%ld of %d)\n",
@@ -293,7 +297,7 @@
     unsigned long in_chunk_order, out_chunk_order;
     xen_pfn_t     gpfn, gmfn, mfn;
     unsigned long i, j, k;
-    unsigned int  memflags = 0;
+    unsigned int  memflags = 0, cpu;
     long          rc = 0;
     struct domain *d;
     struct page_info *page;
@@ -367,6 +371,9 @@
     }
 
     d = current->domain;
+    /* use domain's first processor for locality parameter */
+    cpu = d->vcpu[0]->processor;
+
     for ( i = 0; i < (exch.in.nr_extents >> in_chunk_order); i++ )
     {
         if ( hypercall_preempt_check() )
@@ -412,8 +419,8 @@
         /* Allocate a chunk's worth of anonymous output pages. */
         for ( j = 0; j < (1UL << out_chunk_order); j++ )
         {
-            page = alloc_domheap_pages(
-                NULL, exch.out.extent_order, memflags);
+            page = __alloc_domheap_pages( NULL, cpu,
+                exch.out.extent_order, memflags);
             if ( unlikely(page == NULL) )
             {
                 rc = -ENOMEM;

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel