
Re: [Xen-devel] Cpu pools discussion



On Tue, Jul 28, 2009 at 2:31 PM, Tim Deegan <Tim.Deegan@xxxxxxxxxx> wrote:
> At 14:24 +0100 on 28 Jul (1248791073), Juergen Gross wrote:
>> > Does strict partitioning of CPUs like this satisfy everyone's
>> > requirements?  Bearing in mind that
>> >
>> >  - It's not work-conserving, i.e. it doesn't allow best-effort
>> >    scheduling of pool A's vCPUs on the idle CPUs of pool B.
>> >
>> >  - It restricts the maximum useful number of vCPUs per guest to the size
>> >    of a pool rather than the size of the machine.
>> >
>> >  - dom0 would be restricted to a subset of CPUs.  That seems OK to me
>> >    but occasionally people talk about having dom0's vCPUs pinned 1-1 on
>> >    the physical CPUs.
>>
>> You don't have to define other pools. You can just live with the default pool
>> extended to all cpus and everything is as today.
>
> Yep, all I'm saying is you can't do both.  If the people who want this
> feature (so far I count two of you) want to do both, then this
> solution's not good enough, and we should think about that before going
> ahead with it.

Yes, if you have more than one pool, then dom0 can't run on all cpus;
but it can still run with dom0's vcpus pinned 1-1 on the physical cpus
in its own pool.

I'm not sure why someone who wants to partition a machine would
simultaneously want dom0 to run across all cpus...

As Juergen says, for people who don't use the feature, it shouldn't
have any real effect.  The patch is pretty straightforward, except for
the "continue_hypercall_on_cpu()" bit.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
