
Re: [Xen-devel] [PATCH] 0/7 xen: Add basic NUMA support

* Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2005-12-16 19:28]:
> > The patchset will add basic NUMA support to Xen (hypervisor 
> > only).  
> I think we need a lot more discussion on this -- your approach differs
> from what we've previously discussed on the list. We need a session at
> the Jan summit.


> > Using this information, we also modified the page allocator 
> > to provide a simple NUMA-aware API.  The modified allocator 
> > will attempt to find pages local to the cpu where possible, 
> > but will fall back on using memory that is of the requested 
> > size rather than fragmenting larger contiguous chunks to find 
> > local pages.  We expect to tune this algorithm in the future 
> > after further study.
> Personally, I think we should have separate buddy allocators for each of
> the zones; much simpler and faster in the common case.

I'm not sure how having multiple buddy allocators helps one choose
memory local to a node.  Do you mean to have a buddy allocator per node?
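For what it's worth, here is a minimal sketch of the fallback policy the patch
description outlines, assuming one buddy-style free area per node (the names
free_count and alloc_chunk are hypothetical, not Xen's actual allocator):
prefer a local chunk of the requested order, then a remote chunk of that same
order, and only split a larger local chunk as a last resort.

```c
/* Sketch only -- not Xen's allocator. Models the per-node policy:
 * local exact-order chunk, then remote exact-order chunk, then
 * split a larger local chunk. */
#include <assert.h>

#define MAX_NODES  2
#define MAX_ORDER  4

/* free_count[node][order]: free 2^order-page chunks on that node */
static int free_count[MAX_NODES][MAX_ORDER + 1];

/* Returns the node the chunk came from, or -1 if allocation fails. */
static int alloc_chunk(int local_node, int order)
{
    int node, k, j;

    /* 1. Exact-order chunk on the local node. */
    if (free_count[local_node][order] > 0) {
        free_count[local_node][order]--;
        return local_node;
    }

    /* 2. Exact-order chunk on a remote node (avoids fragmenting
     *    larger local chunks). */
    for (node = 0; node < MAX_NODES; node++) {
        if (node != local_node && free_count[node][order] > 0) {
            free_count[node][order]--;
            return node;
        }
    }

    /* 3. Last resort: split a larger local chunk, leaving one free
     *    buddy at each order from `order` up to k-1. */
    for (k = order + 1; k <= MAX_ORDER; k++) {
        if (free_count[local_node][k] > 0) {
            free_count[local_node][k]--;
            for (j = k - 1; j >= order; j--)
                free_count[local_node][j]++;
            return local_node;
        }
    }
    return -1;
}
```

A per-node buddy allocator like this would keep the common case (local hit at
the requested order) cheap, which may be what you mean by "simpler and faster".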

> > We also modified Xen's increase_reservation memory op to 
> > balance memory distribution across the vcpus in use by a 
> > domain.  Relying on previous patches which have already been 
> > committed to xen-unstable, a guest can be constructed such 
> > that its entire memory is contained within a specific NUMA node.
> This makes sense for 1 vcpu guests, but for multi vcpu guests this needs
> way more discussion. How do we expose the (potentially dynamic) mapping
> of vcpus to nodes? How do we expose the different memory zones to
> guests? How does Linux make use of this information? This is a can of
> worms, definitely phase 2. 

I believe this makes sense for multi-vcpu guests as well, since the vcpu
to cpu mapping is currently known at domain construction time, prior to
memory allocation.  The dynamic case requires some thought: we don't want
to spread memory across nodes, unplug two or three vcpus, and then incur a
large number of remote accesses because the remaining vcpus are no longer
local to all of the domain's memory.
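The balancing in the increase_reservation change amounts to striping pages
across the nodes backing the domain's vcpus.  A rough sketch (illustrative
only; the struct and function names are hypothetical, not the actual Xen
code):

```c
/* Sketch of striping a domain's memory across the nodes hosting its
 * vcpus: page i of a reservation request is placed on the node of
 * vcpu (i mod nr_vcpus). Names here are hypothetical. */
#include <assert.h>

#define MAX_VCPUS 4

struct domain_info {
    int nr_vcpus;
    int vcpu_node[MAX_VCPUS];   /* node each vcpu is placed on */
};

static int node_for_page(const struct domain_info *d,
                         unsigned long page_idx)
{
    return d->vcpu_node[page_idx % d->nr_vcpus];
}
```

With all vcpus placed on one node this degenerates to the single-node case,
which is how a guest's entire memory ends up contained within one NUMA node.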

The phase two plan is to provide virtual SRAT and SLIT tables to the
guests to leverage existing Linux NUMA code.  Lots to discuss here.
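For reference, a SLIT is just a node-to-node distance matrix with the local
(diagonal) distance normalized to 10 per the ACPI spec; a virtual SLIT handed
to a two-node guest might look like this (distance values are made up for
illustration):

```c
/* Illustrative virtual SLIT for a two-node guest. ACPI normalizes
 * the local distance to 10; larger entries mean relatively more
 * expensive remote access. Values here are made up. */
#include <assert.h>

#define NR_NODES 2

static const unsigned char slit[NR_NODES][NR_NODES] = {
    { 10, 20 },   /* node 0 -> {node 0, node 1} */
    { 20, 10 },   /* node 1 -> {node 0, node 1} */
};

static unsigned char node_distance(int from, int to)
{
    return slit[from][to];
}
```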

> If only we had an x445 to be able to work on these patches :)


Thanks for the feedback.

Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253

Xen-devel mailing list