Re: [Xen-devel] [PATCH 00/12] cpumask handling scalability improvements
On 20/10/2011 14:36, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> This patch set makes some first steps towards eliminating the old cpumask
> accessors, replacing them with ones that don't require the full NR_CPUS
> bits to be allocated (which obviously can be pretty wasteful when
> NR_CPUS is high but the actual number of CPUs is low or moderate).
>
> 01: introduce and use nr_cpu_ids and nr_cpumask_bits
> 02: eliminate cpumask accessors referencing NR_CPUS
> 03: eliminate direct assignments of CPU masks
> 04: x86: allocate IRQ actions' cpu_eoi_map dynamically
> 05: allocate CPU sibling and core maps dynamically

01-05/07-12: Acked-by: Keir Fraser <keir@xxxxxxx>

> 06: allow efficient allocation of multiple CPU masks at once

Not this one.

 -- Keir

> One reason I put the following ones together was to reduce the
> differences between the disassembly of hypervisors built for 4095
> and 2047 CPUs, which I looked at to determine the places where
> cpumask_t variables get copied without using cpumask_copy() (a
> job where grep is of no help). Hence consider these patches optional,
> but recommended.
>
> 07: cpufreq: allocate CPU masks dynamically
> 08: x86/p2m: allocate CPU masks dynamically
> 09: cpupools: allocate CPU masks dynamically
> 10: credit: allocate CPU masks dynamically
> 11: x86/hpet: allocate CPU masks dynamically
> 12: cpumask <=> xenctl_cpumap: allocate CPU masks and byte maps dynamically
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
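For readers unfamiliar with the cpumask rework, here is a minimal sketch of the idea the cover letter describes: size each mask for the CPUs actually present (nr_cpu_ids) rather than embedding a full NR_CPUS-bit array, and copy masks through an explicit helper instead of direct struct assignment (which patch 03 eliminates and which cpumask_copy() replaces). This is a self-contained userspace approximation, not Xen code; every identifier here (MY_NR_CPUS, my_nr_cpu_ids, my_cpumask_var_t, my_zalloc_cpumask_var, my_cpumask_copy, my_cpumask_set_cpu) is a hypothetical stand-in loosely modelled on the Linux-style cpumask_var_t/cpumask_copy() interface the series moves Xen towards.

/*
 * Illustrative sketch only (not Xen code): why allocating nr_cpu_ids bits
 * per mask beats embedding a full NR_CPUS-bit array in every structure.
 * All names are hypothetical stand-ins.
 */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MY_NR_CPUS 4096                    /* build-time maximum */
static unsigned int my_nr_cpu_ids = 8;     /* CPUs actually present at boot */

typedef unsigned long *my_cpumask_var_t;   /* heap-allocated bitmap */

#define BITS_PER_LONG (CHAR_BIT * sizeof(long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Allocate a zeroed mask sized for the CPUs that exist, not for MY_NR_CPUS. */
static int my_zalloc_cpumask_var(my_cpumask_var_t *mask)
{
    *mask = calloc(BITS_TO_LONGS(my_nr_cpu_ids), sizeof(long));
    return *mask != NULL;
}

static void my_free_cpumask_var(my_cpumask_var_t mask)
{
    free(mask);
}

/* Once masks are pointers, "dst = src" no longer copies the bits; callers
 * must go through an explicit copy helper. */
static void my_cpumask_copy(my_cpumask_var_t dst, const my_cpumask_var_t src)
{
    memcpy(dst, src, BITS_TO_LONGS(my_nr_cpu_ids) * sizeof(long));
}

static void my_cpumask_set_cpu(unsigned int cpu, my_cpumask_var_t mask)
{
    mask[cpu / BITS_PER_LONG] |= 1UL << (cpu % BITS_PER_LONG);
}

int main(void)
{
    my_cpumask_var_t online, scratch;

    if (!my_zalloc_cpumask_var(&online) || !my_zalloc_cpumask_var(&scratch))
        return 1;

    my_cpumask_set_cpu(3, online);
    my_cpumask_copy(scratch, online);      /* instead of "scratch = online" */

    printf("per-mask size: %zu bytes instead of %zu\n",
           BITS_TO_LONGS(my_nr_cpu_ids) * sizeof(long),
           BITS_TO_LONGS(MY_NR_CPUS) * sizeof(long));

    my_free_cpumask_var(scratch);
    my_free_cpumask_var(online);
    return 0;
}

With 4096 possible CPUs but only 8 present, each mask in this sketch shrinks from 512 bytes to 8. That is the kind of saving the cover letter refers to when NR_CPUS is high but the actual CPU count is low, and it is what patches 04-12 pursue by converting embedded cpumask_t members to dynamically allocated masks.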