
RE: [Xen-devel] "cpus" config parameter broken?

> >> changing. The model I'm aiming for in Xen is to remember all the
> >> CPUs requested by the toolstack, but only schedule onto the subset
> >> that are actually online right now (obviously). The implementation
> >> of this is of course quite simple given that CPU hotplug is not
> >> supported right now.
> >
> > Agreed, but even with CPU hotplug there will be some max_pcpu value
> > on any given machine.  That's why I said "non-existent processor"
> > in the proposal even though you said "offline processor".
> You mean CPUs beyond NR_CPUS? All the cpumask iterators are careful
> not to return values beyond NR_CPUS, regardless of what stray bits
> lie beyond that range in the longword bitmap.

I see... you are allowing for any future box to grow to NR_CPUS,
whereas I am assuming that, even with future hot-add processors,
the box will tell Xen the maximum number of processors that will
ever be online (call this max_pcpu), and that max_pcpu is probably
less than NR_CPUS.  So for the NR_CPUS-max_pcpu processors that are
"non-existent" (and especially for the foreseeable future, on the
vast majority of machines, where max_pcpu = npcpu = constant and
npcpu << NR_CPUS), attempts to set bits for non-existent processors
should not be silently ignored and discarded; they should either be
disallowed entirely or, at the least, retained and ignored.  I would
propose "disallowed" for n > max_pcpu, and retained-and-ignored for
online_pcpu < n < max_pcpu.

A related aside: under either hot-add model (yours or mine), the
current modulo mechanism in xm_vcpu_pin is not scalable and imho
should be removed now, before anybody comes to depend on it.

And lastly, this hot-add discussion reinforces in my mind the
difference between affinity, restriction, and pinning, which are
all muddled together in the current hypervisor and tools.


Xen-devel mailing list