
Re: [PATCH 3/9] xen/ppc: Implement atomic.h



On 8/7/23 11:13 AM, Jan Beulich wrote:
> On 03.08.2023 01:02, Shawn Anastasio wrote:
>> Implement atomic.h for PPC, based off of the original Xen 3.2
>> implementation.
> 
> Since likely that originally came from Linux, did you cross check that
> Linux hasn't gained any bug fixes in the meantime?

I did -- the atomic barrier instructions used by Linux have changed
since this code was originally written, so I've updated them to be in
line with modern Linux.
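For the record, a rough sketch of the direction modern Linux took (this
is from memory of arch/powerpc/include/asm/synch.h, not a verbatim
copy -- the authoritative definitions live there):

```c
/* Rough sketch, from memory of Linux's arch/powerpc/include/asm/synch.h;
 * not a verbatim copy. The historical lighter-weight barrier pair was
 * replaced with heavier barriers after ordering bugs were found, since
 * lwsync does not order a prior store against a later load. */
#define PPC_ATOMIC_ENTRY_BARRIER "sync\n"
#define PPC_ATOMIC_EXIT_BARRIER  "sync\n"
```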

> Other than this just a couple of nits; I'm not really qualified to
> review in particular the inline assembly here, I'm afraid.
> 
>> --- /dev/null
>> +++ b/xen/arch/ppc/include/asm/atomic.h
>> @@ -0,0 +1,387 @@
>> +/* SPDX-License-Identifier: GPL-2.0-or-later */
>> +/*
>> + * PowerPC64 atomic operations
>> + *
>> + * Copyright (C) 2001 Paul Mackerras <paulus@xxxxxxxxxx>, IBM
>> + * Copyright (C) 2001 Anton Blanchard <anton@xxxxxxxxxx>, IBM
>> + * Copyright Raptor Engineering LLC
>> + *
>> + * This program is free software; you can redistribute it and/or
>> + * modify it under the terms of the GNU General Public License
>> + * as published by the Free Software Foundation; either version
>> + * 2 of the License, or (at your option) any later version.
>> + */
>> +
>> +#ifndef _ASM_PPC64_ATOMIC_H_
>> +#define _ASM_PPC64_ATOMIC_H_
> 
> To fit the name, no "64" please.

Will fix.

>> +#include <xen/atomic.h>
>> +
>> +#include <asm/memory.h>
>> +#include <asm/system.h>
>> +
>> +static inline int atomic_read(const atomic_t *v)
>> +{
>> +    return *(volatile int *)&v->counter;
>> +}
>> +
>> +static inline int _atomic_read(atomic_t v)
>> +{
>> +    return v.counter;
>> +}
>> +
>> +static inline void atomic_set(atomic_t *v, int i)
>> +{
>> +    v->counter = i;
>> +}
>> +
>> +static inline void _atomic_set(atomic_t *v, int i)
>> +{
>> +    v->counter = i;
>> +}
>> +
>> +void __bad_atomic_read(const volatile void *p, void *res);
>> +void __bad_atomic_size(void);
>> +
>> +#define build_atomic_read(name, insn, type)                                 \
>> +    static inline type name(const volatile type *addr)                      \
>> +    {                                                                       \
>> +        type ret;                                                           \
>> +        asm volatile ( insn "%U1%X1 %0,%1" : "=r"(ret) : "m<>"(*addr) );    \
> 
> As I think I had mentioned before, asm() constraints want a blank
> between the closing quote and open paren, i.e. like this:
> 
>         asm volatile ( insn "%U1%X1 %0,%1" : "=r" (ret) : "m<>" (*addr) );
> 

My mistake. I went through and hand-formatted all of this code to try
to bring it in line with Xen's style, but forgot about the constraints.

As an aside, I don't suppose there is an automatic formatter somewhere
that I've missed? I found an old clang-format fork that claims to add
support for Xen's formatting[1] but it seems to only handle a subset of
Xen's rules so I haven't found it very useful.

[1] https://github.com/NastyaVicodin/llvm-project/commits/main

>> +#define read_atomic(p)                                                      \
>> +    ({                                                                      \
>> +        union {                                                             \
>> +            typeof(*(p)) val;                                               \
>> +            char c[0];                                                      \
>> +        } x_;                                                               \
>> +        read_atomic_size(p, x_.c, sizeof(*(p)));                            \
>> +        x_.val;                                                             \
>> +    })
>> +
>> +#define write_atomic(p, x)                                                  \
>> +    do                                                                      \
>> +    {                                                                       \
>> +        typeof(*(p)) x_ = (x);                                              \
>> +        write_atomic_size(p, &x_, sizeof(*(p)));                            \
>> +    } while ( 0 )
> 
> Up to here you use underscore-suffixed locals, but then ...
> 
>> +#define add_sized(p, x)                                                     \
>> +    ({                                                                      \
>> +        typeof(*(p)) __x = (x);                                             \
>>    \
> 
> ... you have even two prefixing underscores here.
> 

The definitions of these macros were directly copied from elsewhere in
Xen (x86 and arm). I can change them all to use underscore-suffixed
local naming here, though.

>> +        switch ( sizeof(*(p)) )                                             \
>> +        {                                                                   \
>> +        case 1:                                                             \
>> +            add_u8_sized((uint8_t *) (p), __x);                             \
>> +            break;                                                          \
>> +        case 2:                                                             \
>> +            add_u16_sized((uint16_t *) (p), __x);                           \
>> +            break;                                                          \
>> +        case 4:                                                             \
>> +            add_u32_sized((uint32_t *) (p), __x);                           \
>> +            break;                                                          \
>> +        default:                                                            \
>> +            __bad_atomic_size();                                            \
>> +            break;                                                          \
>> +        }                                                                   \
>> +    })
>> +
>> +static inline void atomic_add(int a, atomic_t *v)
>> +{
>> +    int t;
>> +
>> +    asm volatile ( "1: lwarx %0,0,%3\n"
>> +                   "add %0,%2,%0\n"
>> +                   "stwcx. %0,0,%3\n"
>> +                   "bne- 1b"
>> +                   : "=&r"(t), "=m"(v->counter)
>> +                   : "r"(a), "r"(&v->counter), "m"(v->counter) : "cc" );
> 
> "+m" and then drop the last input?
> 

Yes, that makes sense. Not sure why it was originally written that way
but I'll change it.
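Concretely, the revised block would look something like this (a sketch
of the suggested change, untested):

```c
static inline void atomic_add(int a, atomic_t *v)
{
    int t;

    /* "+m" makes v->counter a read-write operand, replacing the
     * separate "m" input; the operand numbering is unchanged. */
    asm volatile ( "1: lwarx %0,0,%3\n"
                   "add %0,%2,%0\n"
                   "stwcx. %0,0,%3\n"
                   "bne- 1b"
                   : "=&r" (t), "+m" (v->counter)
                   : "r" (a), "r" (&v->counter)
                   : "cc" );
}
```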

>> +static inline int atomic_dec_if_positive(atomic_t *v)
>> +{
>> +    int t;
>> +
>> +    asm volatile(PPC_ATOMIC_ENTRY_BARRIER
>> +                 "1:    lwarx    %0,0,%1        # atomic_dec_if_positive\n"
>> +                 "addic.    %0,%0,-1\n"
>> +                 "blt-    2f\n"
>> +                 "stwcx.    %0,0,%1\n"
>> +                 "bne-    1b\n"
>> +                 PPC_ATOMIC_EXIT_BARRIER
>> +                 "2:" : "=&r"(t)
>> +                 : "r"(&v->counter) : "cc", "memory");
> 
> Missing blanks near the parentheses. Would also be nice if the padding
> blanks actually vertically aligned the operands.

I tried to make the spacing uniform across all asm blocks in this file,
but the convention I used was just a single space between the mnemonic
and the operands. I seem to have missed this one, though, so I'll bring
it in line with the others.
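For concreteness, with the single-space convention and the blanks
around the constraint parentheses that Jan asked for, the block would
read something like (sketch, untested):

```c
static inline int atomic_dec_if_positive(atomic_t *v)
{
    int t;

    asm volatile ( PPC_ATOMIC_ENTRY_BARRIER
                   "1: lwarx %0,0,%1 # atomic_dec_if_positive\n"
                   "addic. %0,%0,-1\n"
                   "blt- 2f\n"
                   "stwcx. %0,0,%1\n"
                   "bne- 1b\n"
                   PPC_ATOMIC_EXIT_BARRIER
                   "2:"
                   : "=&r" (t)
                   : "r" (&v->counter)
                   : "cc", "memory" );

    return t;
}
```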

>> +    return t;
>> +}
>> +
>> +static inline atomic_t atomic_compareandswap(atomic_t old, atomic_t new,
>> +                                             atomic_t *v)
>> +{
>> +    atomic_t rc;
>> +    rc.counter = __cmpxchg(&v->counter, old.counter, new.counter, sizeof(int));
>> +    return rc;
>> +}
>> +
>> +#define arch_cmpxchg(ptr, o, n)                                             \
>> +    ({                                                                      \
>> +        __typeof__(*(ptr)) _o_ = (o);                                       \
>> +        __typeof__(*(ptr)) _n_ = (n);                                       \
> 
> Problematic leading underscores again.
> 

Will fix.

>> +        (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long) _o_,          \
>> +                                       (unsigned long) _n_, sizeof(*(ptr))); \
>> +    })
>> +
>> +static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
>> +{
>> +    return arch_cmpxchg(&v->counter, old, new);
>> +}
>> +
>> +#define ATOMIC_OP(op, insn, suffix, sign) \
>> +    static inline void atomic_##op(int a, atomic_t *v)                      \
>> +    {                                                                       \
>> +        int t;                                                              \
>> +        asm volatile ( "1: lwarx %0,0,%3\n"                                 \
>> +                       insn "%I2" suffix " %0,%0,%2\n"                      \
>> +                       "stwcx. %0,0,%3 \n"                                  \
>> +                       "bne- 1b\n"                                          \
>> +                       : "=&r"(t), "+m"(v->counter)                         \
>> +                       : "r" #sign(a), "r"(&v->counter)                     \
>> +                       : "cc" );                                            \
>> +    }
>> +    }
>> +
>> +ATOMIC_OP(and, "and", ".", K)
>> +
>> +static inline int atomic_sub_and_test(int i, atomic_t *v)
>> +{
>> +    return atomic_sub_return(i, v) == 0;
>> +}
>> +
>> +static inline int atomic_inc_and_test(atomic_t *v)
>> +{
>> +    return atomic_add_return(1, v) == 0;
>> +}
>> +
>> +static inline int atomic_dec_and_test(atomic_t *v)
>> +{
>> +    return atomic_sub_return(1, v) == 0;
>> +}
>> +
>> +static inline int atomic_add_negative(int i, atomic_t *v)
>> +{
>> +    return atomic_add_return(i, v) < 0;
>> +}
>> +
>> +static inline int __atomic_add_unless(atomic_t *v, int a, int u)
>> +{
>> +    int c, old;
>> +
>> +    c = atomic_read(v);
>> +    while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c)
> 
> Btw, no real need to parenthesize v in cases like this one. Otoh a needs
> parenthesizing.

FWIW this was copied directly from arm64/atomic.h, but seeing as it's a
normal function and not a macro, I'm not sure I see why `a` would need
parentheses. I'll drop the ones around `v`, though.

> Jan

Thanks,
Shawn



 

