
Re: [Xen-devel] [PATCH] bitmap_*() should cope with zero size bitmaps



>>> On 03.06.13 at 17:40, Tim Deegan <tim@xxxxxxx> wrote:
> At 14:33 +0100 on 03 Jun (1370270004), Jan Beulich wrote:
>> ... to match expectations set by memset()/memcpy().
>> 
>> Similarly for find_{first,next}_{,zero_}bit() on x86.
>> 
>> __bitmap_shift_{left,right}() would also need fixing (they more
>> generally can't cope with the shift count being larger than the bitmap
>> size, and they perform undefined operations by possibly shifting an
>> unsigned long value by BITS_PER_LONG bits), but since these functions
>> aren't really used anywhere I wonder if we wouldn't better simply get
>> rid of them.
>> 
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> 
>> --- a/xen/include/asm-x86/bitops.h
>> +++ b/xen/include/asm-x86/bitops.h
>> @@ -327,10 +327,7 @@ static inline unsigned int __scanbit(uns
>>   * Returns the bit-number of the first set bit, not the number of the byte
>>   * containing a bit.
>>   */
>> -#define find_first_bit(addr,size)                               \
>> -((__builtin_constant_p(size) && (size) <= BITS_PER_LONG ?       \
>> -  (__scanbit(*(const unsigned long *)addr, size)) :             \
>> -  __find_first_bit(addr,size)))
>> +#define find_first_bit(addr, size) find_next_bit(addr, size, 0)
>>  
>>  /**
>>   * find_next_bit - find the first set bit in a memory region
>> @@ -338,10 +335,24 @@ static inline unsigned int __scanbit(uns
>>   * @offset: The bitnumber to start searching at
>>   * @size: The maximum size to search
>>   */
>> -#define find_next_bit(addr,size,off)                                     \
>> -((__builtin_constant_p(size) && (size) <= BITS_PER_LONG ?                \
>> -  ((off) + (__scanbit((*(const unsigned long *)addr) >> (off), size))) : \
>> -  __find_next_bit(addr,size,off)))
>> +#define find_next_bit(addr, size, off) ({ \
>> +    unsigned int r__ = (size); \
>> +    unsigned int o__ = (off); \
>> +    switch ( -!__builtin_constant_p(size) | r__ ) \
> 
> Mmmm, binary trickery. :)  But doesn't that leave us with the old
> behaviour in cases where 'size' is a non-compile-time-constant zero?

Intentionally (and not just for zero - the compile time constant zero
case was what was broken, after all).  I could restore the old
behaviour, but I think the real point of optimizing here ought to be
the case where one of the two execution branches can be optimized out
entirely, which is only possible when the size is a compile time
constant.
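
For illustration (the hunk is cut short in the quote above, so this is
only a sketch of the dispatch idea, with made-up scan_word()/scan_bitmap()
helpers rather than the real __scanbit()/__find_next_bit()): when the
size is a compile time constant, -!__builtin_constant_p(size) is 0, the
switch sees the constant value, and the compiler keeps exactly one arm -
nothing at all for a constant zero; when it is not constant, the term is
-1, the controlling expression folds to all-ones, and only the
out-of-line default arm survives.

    /* Sketch of the dispatch trick with hypothetical helpers; both
     * helpers return 'size' when no set bit is found. */
    #include <limits.h>

    #define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

    /* First set bit in a single word, looking at bits [0, size). */
    static inline unsigned int scan_word(unsigned long w, unsigned int size)
    {
        unsigned int i;

        for ( i = 0; i < size; i++ )
            if ( w & (1UL << i) )
                break;
        return i;
    }

    /* Generic scan over an arbitrarily long bitmap. */
    static inline unsigned int scan_bitmap(const unsigned long *p,
                                           unsigned int size,
                                           unsigned int off)
    {
        unsigned int i;

        for ( i = off; i < size; i++ )
            if ( p[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)) )
                break;
        return i;
    }

    /*
     * -!__builtin_constant_p(size) is 0 for a compile time constant
     * size, so the switch sees the constant and all but one arm is
     * discarded (case 0 for a zero size, which simply yields 'size').
     * For a non-constant size the term is -1, the controlling
     * expression folds to all-ones, and only the 'default' arm
     * survives.  The in-word arm assumes off <= size and
     * off < BITS_PER_LONG, as the old macro did.
     */
    #define find_next_bit_sketch(addr, size, off) ({                     \
        unsigned int r__ = (size), o__ = (off);                          \
        switch ( -!__builtin_constant_p(size) | r__ )                    \
        {                                                                \
        case 0:                                                          \
            break;                                                       \
        case 1 ... BITS_PER_LONG:                                        \
            r__ = o__ + scan_word(*(const unsigned long *)(addr) >> o__, \
                                  r__ - o__);                            \
            break;                                                       \
        default:                                                         \
            r__ = scan_bitmap(addr, r__, o__);                           \
            break;                                                       \
        }                                                                \
        r__;                                                             \
    })

So a non-constant size - zero included - always goes down the generic
path, which is exactly the behaviour being discussed above.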

Jan


