[Xen-changelog] [xen master] x86: use cpumask_any() in mask-to-APIC-ID conversions
commit 105ee865be224999e301b4303c740c1143b67b1d
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri Aug 23 15:04:17 2013 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Aug 23 15:04:17 2013 +0200

    x86: use cpumask_any() in mask-to-APIC-ID conversions

    This is to avoid picking CPU0 for almost any such operation,
    resulting in very uneven distribution of interrupt load.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Keir Fraser <keir@xxxxxxx>
---
 xen/arch/x86/genapic/delivery.c |    2 +-
 xen/arch/x86/genapic/x2apic.c   |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/genapic/delivery.c b/xen/arch/x86/genapic/delivery.c
index cdab333..94eb857 100644
--- a/xen/arch/x86/genapic/delivery.c
+++ b/xen/arch/x86/genapic/delivery.c
@@ -67,5 +67,5 @@ const cpumask_t *vector_allocation_cpumask_phys(int cpu)
 unsigned int cpu_mask_to_apicid_phys(const cpumask_t *cpumask)
 {
     /* As we are using single CPU as destination, pick only one CPU here */
-    return cpu_physical_id(cpumask_first(cpumask));
+    return cpu_physical_id(cpumask_any(cpumask));
 }
diff --git a/xen/arch/x86/genapic/x2apic.c b/xen/arch/x86/genapic/x2apic.c
index d4c9149..b2cab03 100644
--- a/xen/arch/x86/genapic/x2apic.c
+++ b/xen/arch/x86/genapic/x2apic.c
@@ -81,7 +81,7 @@ static const cpumask_t *vector_allocation_cpumask_x2apic_cluster(int cpu)
 static unsigned int cpu_mask_to_apicid_x2apic_cluster(const cpumask_t *cpumask)
 {
-    unsigned int cpu = cpumask_first(cpumask);
+    unsigned int cpu = cpumask_any(cpumask);
     unsigned int dest = per_cpu(cpu_2_logical_apicid, cpu);
     const cpumask_t *cluster_cpus = per_cpu(cluster_cpus, cpu);
--
generated by git-patchbot for /home/xen/git/xen.git#master
_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog