Re: [PATCH v3 10/34] xen/riscv: introduce bitops.h
On Tue, 2024-01-16 at 14:24 +0100, Jan Beulich wrote:
> On 16.01.2024 14:06, Oleksii wrote:
> > On Mon, 2024-01-15 at 17:44 +0100, Jan Beulich wrote:
> > > On 22.12.2023 16:12, Oleksii Kurochko wrote:
> > > > +#define test_and_set_bit   __test_and_set_bit
> > > > +#define test_and_clear_bit __test_and_clear_bit
> > >
> > > I realize test-and-change have no present users, despite being made
> > > available by Arm and x86, but I think they would better be provided
> > > right away, rather than someone introducing a use then needing to
> > > fiddle with RISC-V (and apparently also PPC) code.
> > Sure, it makes sense. I'll add test-and-change too.
> >
> > > I'm also puzzled by this aliasing: Aren't there cheaper non-atomic
> > > insn forms that could be used for the double-underscore-prefixed
> > > variants?
> > It would be cheaper, but I assumed that this API should be safe in the
> > case of SMP, where different CPUs can access the same variable, or in
> > similar cases with simultaneous access to the variable.
> Of course, that's what test_and_...() are for. __test_and_...() are
> for cases where there's no concurrency, and hence the cheaper forms
> can be used. Thus my asking about the aliasing done above.
Then it makes sense to update __test_and_...() to use non-atomic insns
(a rough sketch of such a form is further down in this mail). I'll do
that in the next patch version. Thanks for the explanation.

> > > > +#if BITS_PER_LONG == 64
> > > > +    if ((word & 0xffffffff) == 0) {
> > > > +        num += 32;
> > > > +        word >>= 32;
> > > > +    }
> > >
> > > You're ending up with neither Xen nor Linux style this way. May I
> > > suggest to settle on either?
> >
> > Will it be fine to rework the header from Linux to Xen style? Does it
> > make sense? I think this file can be reworked to Xen style, as I don't
> > expect it to change once it is merged.
> You may keep Linux style or fully switch to Xen style - which one is
> largely up to you. All I'm asking is to avoid introducing further
> mixed-style source files.
I'll be consistent in code style.

> > > > --- /dev/null
> > > > +++ b/xen/include/asm-generic/bitops/bitops-bits.h
> > > > @@ -0,0 +1,10 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +#ifndef _ASM_GENERIC_BITOPS_BITS_H_
> > > > +#define _ASM_GENERIC_BITOPS_BITS_H_
> > > > +
> > > > +#define BITOP_BITS_PER_WORD 32
> > > > +#define BITOP_MASK(nr)      (1UL << ((nr) % BITOP_BITS_PER_WORD))
> > >
> > > Why 1UL and not just 1U, when bits per word is 32?
> > There is no specific reason; it should be 1U. (I originally used
> > BITOPS_BITS_PER_LONG and, with the introduction of asm-generic bitops,
> > decided to follow what other archs provide.)
> >
> > Regarding the second part of the question, I don't understand it
> > fully. Considering the BITOP_BITS_PER_WORD definition for other archs
> > (Arm and PPC), it is expected that the word is 32 bits.
> The 2nd part was explaining why I'm asking. It wasn't another
> question.
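For the record, a rough, untested sketch of what the cheaper non-atomic
forms could look like. The BITOP_* macros are restated only to keep the
snippet self-contained (the BITOP_WORD() definition is my assumption),
and none of this is the final patch:

    /* Restated from the patch only to keep this sketch self-contained. */
    #define BITOP_BITS_PER_WORD 32
    #define BITOP_MASK(nr)      (1U << ((nr) % BITOP_BITS_PER_WORD))
    #define BITOP_WORD(nr)      ((nr) / BITOP_BITS_PER_WORD)

    /*
     * Plain read-modify-write: no AMO instruction, no barrier. Only safe
     * when no other CPU (or interrupt handler) can touch the same word.
     */
    static inline int __test_and_set_bit(int nr, volatile void *addr)
    {
        volatile unsigned int *p = (volatile unsigned int *)addr + BITOP_WORD(nr);
        unsigned int mask = BITOP_MASK(nr);
        unsigned int old = *p;

        *p = old | mask;
        return (old & mask) != 0;
    }

    static inline int __test_and_clear_bit(int nr, volatile void *addr)
    {
        volatile unsigned int *p = (volatile unsigned int *)addr + BITOP_WORD(nr);
        unsigned int mask = BITOP_MASK(nr);
        unsigned int old = *p;

        *p = old & ~mask;
        return (old & mask) != 0;
    }

test_and_set_bit()/test_and_clear_bit() would then keep their atomic
(AMO-based) implementations instead of aliasing these.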
> > > > --- /dev/null
> > > > +++ b/xen/include/asm-generic/bitops/test-bit.h
> > > > @@ -0,0 +1,16 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +#ifndef _ASM_GENERIC_BITOPS_TESTBIT_H_
> > > > +#define _ASM_GENERIC_BITOPS_TESTBIT_H_
> > > > +
> > > > +/**
> > > > + * test_bit - Determine whether a bit is set
> > > > + * @nr: bit number to test
> > > > + * @addr: Address to start counting from
> > > > + */
> > > > +static inline int test_bit(int nr, const volatile void *addr)
> > > > +{
> > > > +    const volatile unsigned int *p = addr;
> > >
> > > With BITOP_BITS_PER_WORD I think you really mean uint32_t here.
> > Isn't it the same: 'unsigned int' and 'uint32_t'?
> No, or else there wouldn't have been a need to introduce uint<N>_t (and
> others) in C99. It just so happens that right now all architectures Xen
> can be built for have sizeof(int) == 4 and CHAR_BITS == 8. In an arch-
> specific header I would see this as less of an issue, but in a generic
> header we'd better avoid encoding wrong assumptions. The one assumption
> we generally make is that sizeof(int) >= 4 and CHAR_BITS >= 8 (albeit I
> bet really in various places we assume CHAR_BITS == 8).
In this case we have to switch to uint<N>_t. Thanks for the
explanation. I'll update this part of the code in the next patch
version (a sketch of the reworked header is at the end of this mail).

> > > Also you want to make sure asm-generic/bitops/bitops-bits.h is
> > > really in use here, or else an arch overriding / not using that
> > > header may end up screwed.
> > I don't really understand what you mean. Could you please explain a
> > little bit more?
> Whichever type you use here, it needs to be in sync with
> BITOP_BITS_PER_WORD. Hence you want to include the _local_ bitops-bits.h
> here, such that in case of an inconsistent override by an arch the
> compiler would complain about the two differing #define-s. (IOW an
> arch overriding BITOP_BITS_PER_WORD cannot re-use this header as-is.)
>
> The same may, btw, be true for others of the new headers you add - the
> same #include would therefore be needed there as well.
Now it is clear to me. It seems like BITOP_BITS_PER_WORD, BITOP_MASK,
BITOP_WORD, and BITS_PER_BYTE are defined in
{arm,ppc,riscv}/include/asm/bitops.h. I expected that any architecture
planning to use asm-generic/bitops/bitops-bits.h would include it at
the beginning of <arch>/include/asm/bitops.h, similar to what is done
for RISC-V:

    #ifndef _ASM_RISCV_BITOPS_H
    #define _ASM_RISCV_BITOPS_H

    #include <asm/system.h>
    #include <asm-generic/bitops/bitops-bits.h>
    ...

But in this case, to allow an architecture to override the macros, it
is necessary to update asm-generic/bitops/bitops-bits.h:

    #ifndef BITOP_BITS_PER_WORD
    #define BITOP_BITS_PER_WORD 32
    #endif
    ...

Therefore, if an architecture needs to override something, it will add
the #define ... before #include <asm-generic/bitops/bitops-bits.h>.

Does it make sense?
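As an illustration of the two points above (uint32_t instead of
unsigned int, plus a #include of the local bitops-bits.h so the type
stays in sync with BITOP_BITS_PER_WORD), the reworked header could end
up looking roughly like this. The exact contents, as well as the
assumptions that BITOP_WORD() moves into bitops-bits.h and that
uint32_t is already visible to generic headers, are mine and not the
final patch:

    /* xen/include/asm-generic/bitops/test-bit.h - illustrative sketch only */
    #ifndef _ASM_GENERIC_BITOPS_TESTBIT_H_
    #define _ASM_GENERIC_BITOPS_TESTBIT_H_

    /*
     * Pull in the local definitions: an arch overriding
     * BITOP_BITS_PER_WORD inconsistently is then caught by the compiler
     * as a differing macro redefinition.
     */
    #include <asm-generic/bitops/bitops-bits.h>

    /**
     * test_bit - Determine whether a bit is set
     * @nr: bit number to test
     * @addr: Address to start counting from
     */
    static inline int test_bit(int nr, const volatile void *addr)
    {
        /*
         * uint32_t rather than unsigned int: a generic header must not
         * encode the assumption that sizeof(int) == 4.
         */
        const volatile uint32_t *p = addr;

        return 1 & (p[BITOP_WORD(nr)] >> (nr % BITOP_BITS_PER_WORD));
    }

    #endif /* _ASM_GENERIC_BITOPS_TESTBIT_H_ */

~ Oleksii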