Re: [PATCH 2/2][4.15?] x86: fix build when NR_CPUS == 1
On Mon, Mar 01, 2021 at 04:14:26PM +0100, Jan Beulich wrote:
> On 01.03.2021 15:01, Roger Pau Monné wrote:
> > On Mon, Mar 01, 2021 at 09:31:07AM +0100, Jan Beulich wrote:
> >> In this case the compiler is recognizing that no valid array indexes
> >> remain (in x2apic_cluster()'s access to per_cpu(cpu_2_logical_apicid,
> >> ...)), but oddly enough isn't really consistent about the checking it
> >> does (see the code comment).
> >
> > I assume this is because of the underlying per_cpu access to
> > __per_cpu_offset using cpu as the index, in which case wouldn't it be
> > better to place the BUG_ON there?
>
> Not sure, to be honest. It seemed more logical to me to place it
> next to where the issue is.

I'm not sure whether I would prefer to place it in per_cpu directly, to
avoid similar issues popping up in other parts of the code in the
future? Maybe that's not likely. TBH it seems kind of random to be
placing these BUG_ON conditionals just to make the compiler happy, but
maybe there's no other option.

> > Also I wonder why the accesses the same function does to the per_cpu
> > area before the modified chunk, using this_cpu as the index, don't
> > also trigger such warnings.
>
> The compiler appears to be issuing the warning when it can prove
> that no legitimate index can make it to a respective use. In this
> case it means that it is
>
>     if ( this_cpu == cpu )
>         continue;
>
> which makes it possible for the compiler to know that what gets
> past this would be an out of bounds access, since for NR_CPUS=1
> both this_cpu and cpu can only validly be zero. (This also
> plays into my choice of placement, because it is not
> x2apic_cluster() on its own which has this issue.)

I see, thanks for the explanation. It makes me wonder whether other
random issues like this will pop up in other parts of the code. We
should have a gitlab build with NR_CPUS=1 in order to assert we don't
regress it. Speaking for myself, I certainly won't be able to detect
whether something broke this support in the future.

Thanks, Roger.
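Below is a minimal, standalone C sketch of the pattern under discussion. It is
not the Xen code or the actual patch: NR_CPUS, cpu_2_logical_apicid,
send_to_cluster() and the exact BUG_ON() placement are illustrative
assumptions, chosen only to show how an out-of-bounds-looking access can arise
for NR_CPUS == 1 and how an explicit check next to the access can silence it.

/* Standalone sketch; not Xen code.  The names and the BUG_ON() placement
 * are illustrative stand-ins for the per-CPU array access discussed above. */
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 1

/* Minimal stand-in for Xen's BUG_ON(): a noreturn path on violation. */
#define BUG_ON(cond) do { if (cond) abort(); } while (0)

static unsigned int cpu_2_logical_apicid[NR_CPUS];

static void send_to_cluster(unsigned int this_cpu)
{
    for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++)
    {
        if (cpu == this_cpu)
            continue;

        /*
         * With NR_CPUS == 1 nothing survives the 'continue' above, and a
         * compiler tracking value ranges may conclude that any index
         * reaching the access below must be out of bounds, and warn.
         * The BUG_ON() gives that impossible path an explicit abort, so
         * the array access can no longer be reached with an invalid index.
         */
        BUG_ON(cpu >= NR_CPUS);
        printf("cpu %u -> logical apicid %u\n",
               cpu, cpu_2_logical_apicid[cpu]);
    }
}

int main(void)
{
    send_to_cluster(0);
    return 0;
}

The point of the sketch is only that the check constrains the index range the
compiler sees at the flagged access itself, which (if this reading is right)
is why placing it next to the access, rather than inside per_cpu(), can be
enough.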