
Re: [PATCH v3 3/5] xen/bitops: Implement hweight32() in terms of hweightl()



On Wed, 4 Sep 2024, Andrew Cooper wrote:
> ... and drop generic_hweight32().
> 
> As noted previously, the only two users of hweight32() are in __init paths.
> 
> The int-optimised form of generic_hweight() is only two instructions shorter
> than the long-optimised form, and even then only on architectures which lack
> fast multiplication, so there's no point providing an int-optimised form.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Acked-by: Jan Beulich <jbeulich@xxxxxxxx>

The patch is OK:

Acked-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>


I was looking at docs/misra/C-language-toolchain.rst to make sure
everything is listed there. We have attr_const as "__const__" noted
among "Non-standard tokens".

Looks like we need to add __always_inline__?
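
(As an aside for readers of the series: the commit message's point about the
int- vs long-optimised forms concerns the SWAR-style population count that
generic_hweight() expands to.  Below is a minimal illustrative sketch of the
long-optimised form, assuming a 64-bit unsigned long and fast multiplication;
the helper name is made up and this is not the verbatim Xen implementation.)

    /*
     * Illustrative sketch only, not the Xen code: SWAR population count
     * for a 64-bit unsigned long.  The final multiply sums the per-byte
     * counts; without fast multiplication it becomes a shift/add chain,
     * which is where the small saving of an int-optimised variant lies.
     */
    static inline unsigned int sketch_hweightl(unsigned long x)
    {
        x -= (x >> 1) & 0x5555555555555555UL;        /* 2-bit partial sums */
        x  = (x & 0x3333333333333333UL) +
             ((x >> 2) & 0x3333333333333333UL);      /* 4-bit partial sums */
        x  = (x + (x >> 4)) & 0x0f0f0f0f0f0f0f0fUL;  /* 8-bit partial sums */
        return (x * 0x0101010101010101UL) >> 56;     /* sum of all bytes   */
    }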


> ---
> CC: Jan Beulich <JBeulich@xxxxxxxx>
> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> CC: Julien Grall <julien@xxxxxxx>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
> CC: Bertrand Marquis <bertrand.marquis@xxxxxxx>
> CC: Michal Orzel <michal.orzel@xxxxxxx>
> CC: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
> CC: Shawn Anastasio <sanastasio@xxxxxxxxxxxxxxxxxxxxx>
> 
> v2:
>  * Reorder with respect to the hweight64() patch
>  * Rewrite the commit message
>  * s/__pure/attr_const/
> ---
>  xen/arch/arm/include/asm/bitops.h | 1 -
>  xen/arch/ppc/include/asm/bitops.h | 1 -
>  xen/arch/x86/include/asm/bitops.h | 1 -
>  xen/include/xen/bitops.h          | 5 +++++
>  4 files changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/bitops.h b/xen/arch/arm/include/asm/bitops.h
> index 91cd167b6bbb..b28c25b3d52d 100644
> --- a/xen/arch/arm/include/asm/bitops.h
> +++ b/xen/arch/arm/include/asm/bitops.h
> @@ -85,7 +85,6 @@ bool clear_mask16_timeout(uint16_t mask, volatile void *p,
>   * The Hamming Weight of a number is the total number of bits set in it.
>   */
>  #define hweight64(x) generic_hweight64(x)
> -#define hweight32(x) generic_hweight32(x)
>  
>  #endif /* _ARM_BITOPS_H */
>  /*
> diff --git a/xen/arch/ppc/include/asm/bitops.h b/xen/arch/ppc/include/asm/bitops.h
> index 64512e949530..f488a7c03425 100644
> --- a/xen/arch/ppc/include/asm/bitops.h
> +++ b/xen/arch/ppc/include/asm/bitops.h
> @@ -133,6 +133,5 @@ static inline int test_and_set_bit(unsigned int nr, volatile void *addr)
>   * The Hamming Weight of a number is the total number of bits set in it.
>   */
>  #define hweight64(x) __builtin_popcountll(x)
> -#define hweight32(x) __builtin_popcount(x)
>  
>  #endif /* _ASM_PPC_BITOPS_H */
> diff --git a/xen/arch/x86/include/asm/bitops.h b/xen/arch/x86/include/asm/bitops.h
> index 4c5b21907a64..507b043b8a86 100644
> --- a/xen/arch/x86/include/asm/bitops.h
> +++ b/xen/arch/x86/include/asm/bitops.h
> @@ -482,6 +482,5 @@ static always_inline unsigned int arch_flsl(unsigned long x)
>   * The Hamming Weight of a number is the total number of bits set in it.
>   */
>  #define hweight64(x) generic_hweight64(x)
> -#define hweight32(x) generic_hweight32(x)
>  
>  #endif /* _X86_BITOPS_H */
> diff --git a/xen/include/xen/bitops.h b/xen/include/xen/bitops.h
> index 58c600155f7e..a462c3065158 100644
> --- a/xen/include/xen/bitops.h
> +++ b/xen/include/xen/bitops.h
> @@ -326,6 +326,11 @@ static always_inline attr_const unsigned int hweightl(unsigned long x)
>  #endif
>  }
>  
> +static always_inline attr_const unsigned int hweight32(uint32_t x)
> +{
> +    return hweightl(x);
> +}
> +
>  /* --------------------- Please tidy below here --------------------- */
>  
>  #ifndef find_next_bit
> -- 
> 2.39.2
> 
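
(Also as an aside: the new hweight32() wrapper is correct because a uint32_t
argument zero-extends when converted to unsigned long, so hweightl() sees no
extra set bits.  A stand-alone sanity check, illustrative only and with
made-up names:)

    #include <assert.h>
    #include <stdint.h>

    /* Reference count: clear the lowest set bit until none remain. */
    static unsigned int popcount_ref(unsigned long x)
    {
        unsigned int n = 0;
        for ( ; x; x &= x - 1 )
            n++;
        return n;
    }

    int main(void)
    {
        uint32_t v = 0xdeadbeef;  /* 24 bits set */
        assert(popcount_ref(v) == popcount_ref((unsigned long)v));
        assert(popcount_ref(v) == 24);
        return 0;
    }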

 

