
[xen master] xen: fix for_each_cpu when NR_CPUS=1



commit aa50f45332f17e8d6308b996d890d3e83748a1a5
Author:     Dario Faggioli <dfaggioli@xxxxxxxx>
AuthorDate: Fri Mar 12 17:02:47 2021 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Mar 12 17:02:47 2021 +0100

    xen: fix for_each_cpu when NR_CPUS=1
    
    When running a hypervisor build with NR_CPUS=1, for_each_cpu does not
    check whether the CPU's bit is actually set in the provided mask.
    
    This means that the body of such a loop always runs exactly once, even
    when the mask is empty and the body should not run at all. This is
    clearly a bug, and it was in fact causing an assert to trigger in the
    credit2 scheduler code (a standalone sketch illustrating this follows
    the patch below).
    
    Removing the special-casing of NR_CPUS == 1 makes things work again.
    
    Reported-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Signed-off-by: Dario Faggioli <dfaggioli@xxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Release-Acked-by: Ian Jackson <iwj@xxxxxxxxxxxxxx>
---
 xen/include/xen/cpumask.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/xen/include/xen/cpumask.h b/xen/include/xen/cpumask.h
index 256b60b106..e69589fc08 100644
--- a/xen/include/xen/cpumask.h
+++ b/xen/include/xen/cpumask.h
@@ -368,15 +368,10 @@ static inline void free_cpumask_var(cpumask_var_t mask)
 #define FREE_CPUMASK_VAR(m) free_cpumask_var(m)
 #endif
 
-#if NR_CPUS > 1
 #define for_each_cpu(cpu, mask)                        \
        for ((cpu) = cpumask_first(mask);       \
             (cpu) < nr_cpu_ids;                \
             (cpu) = cpumask_next(cpu, mask))
-#else /* NR_CPUS == 1 */
-#define for_each_cpu(cpu, mask)                        \
-       for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)(mask))
-#endif /* NR_CPUS */
 
 /*
  * The following particular system cpumasks and operations manage
--
generated by git-patchbot for /home/xen/git/xen.git#master
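
For context beyond the patch itself, here is a minimal, self-contained C
sketch contrasting the removed NR_CPUS == 1 definition of for_each_cpu with
the generic one. This is not Xen code: cpumask_first() and cpumask_next()
below are simplified stand-ins written for this illustration only, matching
their behaviour at NR_CPUS == 1 rather than Xen's real bitmap-based
implementations.

    /*
     * Standalone sketch (not Xen code): with the old NR_CPUS == 1
     * definition, the loop body runs exactly once regardless of the
     * mask contents, even when the mask is empty.
     */
    #include <stdio.h>

    #define NR_CPUS 1

    typedef struct { unsigned long bits[1]; } cpumask_t;

    static unsigned int nr_cpu_ids = NR_CPUS;

    /* Old, buggy NR_CPUS == 1 definition: ignores the mask entirely. */
    #define for_each_cpu_buggy(cpu, mask) \
        for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)(mask))

    /* Simplified stand-in: first set CPU, or nr_cpu_ids if mask is empty. */
    static unsigned int cpumask_first(const cpumask_t *mask)
    {
        return (mask->bits[0] & 1) ? 0 : nr_cpu_ids;
    }

    /* Simplified stand-in: with one CPU, the next one is past the end. */
    static unsigned int cpumask_next(unsigned int cpu, const cpumask_t *mask)
    {
        (void)mask;
        return cpu + 1;
    }

    /* Fixed definition, as in the patch: honours the mask. */
    #define for_each_cpu_fixed(cpu, mask)       \
        for ((cpu) = cpumask_first(mask);       \
             (cpu) < nr_cpu_ids;                \
             (cpu) = cpumask_next(cpu, mask))

    int main(void)
    {
        cpumask_t empty = { { 0 } };
        unsigned int cpu;

        for_each_cpu_buggy(cpu, &empty)
            printf("buggy: body runs for cpu %u despite empty mask\n", cpu);

        for_each_cpu_fixed(cpu, &empty)
            printf("fixed: this line is never printed\n");

        return 0;
    }

Note that dropping the special case costs nothing in the single-CPU build:
for an empty mask, cpumask_first() already returns nr_cpu_ids, so the
generic loop simply never enters its body.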