Re: [Xen-devel] Cpu pools discussion
Tim Deegan wrote:
> At 13:50 +0100 on 28 Jul (1248789008), George Dunlap wrote:
>> On Tue, Jul 28, 2009 at 11:15 AM, Juergen
>> Gross<juergen.gross@xxxxxxxxxxxxxx> wrote:
>>> Tim Deegan wrote:
>>>> That's easily done by setting affinity masks in the tools, without
>>>> needing any mechanism in Xen.
>>> More or less.
>>> You have to set the affinity masks for ALL domains to avoid
>>> scheduling on the "special" cpus.
>
> Bah. You have to set the CPU pool of all domains to achieve the same
> thing; in any case this kind of thing is what toolstacks are good at. :)

No. If I have a dedicated pool for my "special domain" and all other
domains are running in the default pool 0, I only have to set the pool
of my special domain. Nothing else. (The first sketch below shows the
two approaches side by side.)

>>> You won't have reliable scheduling weights any more.
>
> That's a much more interesting argument. It seems to me that in this
> simple case the scheduling weights will work out OK, but I can see
> that in the general case it gets entertaining.

Even in the relatively simple case of two disjoint subsets of
domains/cpus (e.g. 2 domains on cpus 0+1 and 2 domains on cpus 2+3) the
consumed time of the domains does not reflect their weights correctly:
the scheduler accounts weights globally, but a pinned domain can only
compete for the cpus in its own subset. A domain alone on cpu 0 gets a
whole cpu while two domains of equal weight sharing cpu 1 get half a
cpu each, although all three weights are identical.

>> Given that people want to partition a machine, I think cpu pools
>> makes the most sense:
>> * From a user perspective it's easier; no need to pin every VM,
>> simply assign which pool it starts in
>
> I'll say it again because I think it's important: policy belongs in
> the tools. User-friendly abstractions don't have to extend into the
> hypervisor interfaces unless...
>
>> * From a scheduler perspective, it makes thinking about the
>> algorithms easier. It's OK to build in the assumption that each VM
>> can run anywhere. Other than partitioning, there's no real need to
>> adjust the scheduling algorithm to do it.
>
> ...unless there's a benefit to keeping the hypervisor simple. Which
> this certainly looks like.
>
> Does strict partitioning of CPUs like this satisfy everyone's
> requirements? Bearing in mind that
>
> - It's not work-conserving, i.e. it doesn't allow best-effort
>   scheduling of pool A's vCPUs on the idle CPUs of pool B.
>
> - It restricts the maximum useful number of vCPUs per guest to the
>   size of a pool rather than the size of the machine.
>
> - dom0 would be restricted to a subset of CPUs. That seems OK to me
>   but occasionally people talk about having dom0's vCPUs pinned 1-1
>   on the physical CPUs.

You don't have to define other pools. You can just live with the
default pool extended to all cpus, and everything is as it is today.
Pinning still works within each pool, just as it does today.

If a user has domains with different scheduling requirements (e.g. sedf
and credit are to be used), he can use one partitioned machine instead
of two dedicated machines (the second sketch below shows this case).
And he can shift resources between the domains (e.g. devices, memory,
single cores or even threads). He can't do that without pools today.

With pools you have more possibilities without losing any function you
have today. The only restriction is that you might not be able to use
ALL features together with pools (e.g. complete load balancing), but
the alternative would be either to lose some other functionality
(scheduling weights) or to use different machines, which won't give you
load balancing either.
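For concreteness, here is a minimal sketch of the two approaches. The
cpupool commands are written in the syntax the xl toolstack later
adopted, and the domain and pool names (dom1..dom3, "special",
"special-pool") are made up for illustration; at the time of this
thread the exact interface was still under discussion.

# Goal: reserve cpus 2-3 exclusively for the domain "special".

# (a) Affinity-based isolation: EVERY domain has to be touched,
#     including every domain created later.
for dom in dom1 dom2 dom3; do
    xl vcpu-pin "$dom" all 0-1   # keep ordinary domains off cpus 2-3
done
xl vcpu-pin special all 2-3

# (b) Pool-based isolation: only the special domain is touched.
#     Free cpus 2-3 from the default pool, build a pool from them,
#     and migrate the one domain.
xl cpupool-cpu-remove Pool-0 2
xl cpupool-cpu-remove Pool-0 3
cat > special-pool.cfg <<'EOF'
name = "special-pool"
cpus = ["2", "3"]
EOF
xl cpupool-create special-pool.cfg
xl cpupool-migrate special special-pool

With pools, a newly created domain just names its pool in its domain
configuration (the pool= key in later xl versions) and needs no
affinity mask at all.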
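And a sketch of the mixed-scheduler case, under the same assumptions
(later xl syntax; "rt-domain" and "sedf-pool" are invented names):
Pool-0 keeps the default credit scheduler for the throughput domains,
while a second pool runs sedf for the latency-sensitive one.

# Free two cpus from the default (credit) pool ...
xl cpupool-cpu-remove Pool-0 2
xl cpupool-cpu-remove Pool-0 3

# ... and hand them to a pool with its own scheduler.
cat > sedf-pool.cfg <<'EOF'
name  = "sedf-pool"
sched = "sedf"
cpus  = ["2", "3"]
EOF
xl cpupool-create sedf-pool.cfg
xl cpupool-migrate rt-domain sedf-pool

# Shifting a core between the pools later is two operations:
xl cpupool-cpu-remove sedf-pool 3
xl cpupool-cpu-add Pool-0 3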
Juergen

--
Juergen Gross                  Principal Developer Operating Systems
TSP ES&S SWE OS6               Telephone: +49 (0) 89 636 47950
Fujitsu Technology Solutions   e-mail: juergen.gross@xxxxxxxxxxxxxx
Otto-Hahn-Ring 6               Internet: ts.fujitsu.com
D-81739 Muenchen               Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel