Re: [RFC PATCH 2/3] xen/spinlock: split recursive spinlocks from normal ones
On 14.12.22 11:39, Jan Beulich wrote:
> On 10.09.2022 17:49, Juergen Gross wrote:
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -397,7 +397,7 @@ int p2m_pod_empty_cache(struct domain *d)
>>      /* After this barrier no new PoD activities can happen. */
>>      BUG_ON(!d->is_dying);
>> -    spin_barrier(&p2m->pod.lock.lock);
>> +    spin_barrier(&p2m->pod.lock.lock.lock);
>
> This is getting unwieldy as well, and ...
>
>> @@ -160,21 +165,30 @@ typedef union {
>>  typedef struct spinlock {
>>      spinlock_tickets_t tickets;
>> -    u16 recurse_cpu:SPINLOCK_CPU_BITS;
>> -#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>> -#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>> -    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>> -#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>>      union lock_debug debug;
>>  #ifdef CONFIG_DEBUG_LOCK_PROFILE
>>      struct lock_profile *profile;
>>  #endif
>>  } spinlock_t;
>>
>> +struct spinlock_recursive {
>> +    struct spinlock lock;
>> +    u16 recurse_cpu:SPINLOCK_CPU_BITS;
>> +#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>> +#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>> +    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>> +#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>> +};
>
> ... I'm not sure anyway it's a good idea to embed spinlock_t inside the
> new struct.
> I'd prefer if non-optional fields were always at the same position, and
> there's not going to be that much duplication if we went with
>
> typedef struct spinlock {
>     spinlock_tickets_t tickets;
>     union lock_debug debug;
> #ifdef CONFIG_DEBUG_LOCK_PROFILE
>     struct lock_profile *profile;
> #endif
> } spinlock_t;
>
> typedef struct rspinlock {
>     spinlock_tickets_t tickets;
>     u16 recurse_cpu:SPINLOCK_CPU_BITS;
> #define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
> #define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>     u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
> #define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>     union lock_debug debug;
> #ifdef CONFIG_DEBUG_LOCK_PROFILE
>     struct lock_profile *profile;
> #endif
> } rspinlock_t;
>
> This would also eliminate the size increase of recursive locks in
> debug builds. And functions like spin_barrier() then also would (have
> to) properly indicate what kind of lock they act on.

You are aware that this would require duplicating all the spinlock
functions for the recursive spinlocks?

I'm not strictly against this, but before going this route I think I
should mention the implications.


Juergen