Re: [PATCH 2/7] xen/bitops: Implement ffs() in common logic
On 13.03.2024 18:27, Andrew Cooper wrote:
> --- a/xen/arch/arm/include/asm/bitops.h
> +++ b/xen/arch/arm/include/asm/bitops.h
> @@ -157,7 +157,7 @@ static inline int fls(unsigned int x)
> }
>
>
> -#define ffs(x) ({ unsigned int __t = (x); fls(ISOLATE_LSB(__t)); })
> +#define arch_ffs(x) ({ unsigned int __t = (x); fls(ISOLATE_LSB(__t)); })
The way the macro is now invoked, I don't think the helper local
variable is needed anymore?
> --- a/xen/arch/x86/include/asm/bitops.h
> +++ b/xen/arch/x86/include/asm/bitops.h
> @@ -430,16 +430,23 @@ static inline int ffsl(unsigned long x)
> return (int)r+1;
> }
>
> -static inline int ffs(unsigned int x)
> +static inline unsigned int arch_ffs(unsigned int x)
> {
> - int r;
> + int r = -1;
> +
> + /*
> + * The AMD manual states that BSF won't modify the destination register if
> + * x=0. The Intel manual states that the result is undefined, but the
> + * architects have said that the register is written back with its old
> + * value, possibly zero extended above 32 bits.
> + */
> + asm ( "bsf %[val], %[res]"
> + : [res] "+r" (r)
> + : [val] "rm" (x) );
And this isn't what the compiler would be doing anyway?
Also, just to mention it: I take it that you/we are sure that disallowing
both operands to be the same register is still better than ...
> - asm ( "bsf %1,%0\n\t"
> - "jnz 1f\n\t"
> - "mov $-1,%0\n"
> - "1:" : "=r" (r) : "rm" (x));
... the original form?
> --- a/xen/common/bitops.c
> +++ b/xen/common/bitops.c
> @@ -34,8 +34,18 @@
> RUNTIME_CHECK(fn, val, res); \
> } while ( 0 )
>
> +static void test_ffs(void)
Nit: __init please, even if there ought to be no reason for the compiler
not to inline this function.
> --- a/xen/include/xen/bitops.h
> +++ b/xen/include/xen/bitops.h
> @@ -110,6 +110,21 @@ static inline int generic_flsl(unsigned long x)
>
> #include <asm/bitops.h>
>
> +/*
> + * Find First Set bit. Bits are labelled from 1.
> + */
> +static always_inline __pure unsigned int ffs(unsigned int x)
Why always_inline?
> +{
> + if ( __builtin_constant_p(x) )
> + return __builtin_ffs(x);
> +
> +#ifndef arch_ffs
> +#define arch_ffs __builtin_ffs
> +#endif
> +
> + return arch_ffs(x);
> +}
Just to mention it: __builtin_ffs() takes and returns plain int. I'm
happy about our own helper being unsigned-correct, but anything like
this has a Misra angle too.
Jan