
Re: [Xen-devel] RFC: Still TODO for 4.2? xl domain numa memory allocation vs xm/xend



On Fri, 2012-01-20 at 16:43 +0000, George Dunlap wrote:
> On Fri, 2012-01-20 at 16:39 +0000, Ian Campbell wrote:
> > On Fri, 2012-01-20 at 16:31 +0000, George Dunlap wrote:
> > > On Fri, 2012-01-20 at 16:28 +0000, Ian Campbell wrote:
> > > > On Fri, 2012-01-20 at 16:21 +0000, Ian Campbell wrote:
> > > > > cpupools don't seem to do this, I don't know if that is expected or 
> > > > > not.
> > > > 
> > > > Right, so cpupools do not appear to set the vcpu affinity, at least not
> > > > at the level where it affects memory allocation. However both
> > > >         pool="Pool-node0" cpus="0-7"
> > > > and
> > > >         pool="Pool-node1" cpus="8-15"
> > > > work as expected on a system with 8 cpus per node.
> > > > 
> > > > Should something be doing this pinning automatically?
> > > 
> > > It seems like it would be useful; but then we have the issue of what to
> > > do if a VM pinned to cpus 0-3 of Pool-node0 is moved to Pool-node1.
> > 
> > I've no idea, it's not clear to me now what the semantics of cpupools
> > are if they don't restrict the VCPU affinity like I previously assumed.
> 
> Well, it does restrict which cpus the VM will run on; the effective
> affinity will be the intersection of the pool's cpus and the vcpu affinity.

Ah, right.

I confused myself into thinking that cpupools ~= NUMA because I've only
ever used cpupool-numa-split, but I can see that you might also divide
your cpus up in some other way.
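(For reference, this is roughly the per-node setup I had been testing
with. The pool names are the ones cpupool-numa-split produces, and the
pool=/cpus= keys are the same ones quoted above; treat it as a sketch
rather than a complete guest config:

        # after `xl cpupool-numa-split` has created Pool-node0, Pool-node1, ...
        # guest config fragment to keep the domain on node 1:
        pool = "Pool-node1"
        cpus = "8-15"
)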

Should that same combined cpu mask be used to derive d->node_affinity
though? It seems like it would make sense.

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

