Re: [PATCH v4 12/12] xen/spinlock: support higher number of cpus
On 29.02.2024 17:45, Juergen Gross wrote:
> On 29.02.24 17:31, Jan Beulich wrote:
>> On 29.02.2024 17:29, Jürgen Groß wrote:
>>> On 29.02.24 16:46, Jan Beulich wrote:
>>>> On 12.12.2023 10:47, Juergen Gross wrote:
>>>>> Allow 16 bits per cpu number, which is the limit imposed by
>>>>> spinlock_tickets_t.
>>>>>
>>>>> This will allow up to 65535 cpus, while increasing only the size of
>>>>> recursive spinlocks in debug builds from 8 to 12 bytes.
>>>>
>>>> I think we want to be more conservative here, for the case of there
>>>> being bugs: The CPU holding a lock may wrongly try to acquire it a
>>>> 2nd time. That's the 65536th ticket then, wrapping the value.
>>>
>>> Is this really a problem? There will be no other cpu left seeing the lock
>>> as "free" in this case, as all others will be waiting for the head to
>>> reach their private tail value.
>>
>> But isn't said CPU then going to make progress, rather than indefinitely
>> spinning on the lock?
>
> No, I don't think so.

Hmm. If CPU0 takes a pristine lock, it'll get ticket 0x0000. All other CPUs
will get tickets 0x0001 ... 0xffff. Then CPU0 trying to take the lock again
will get ticket 0x0000 again, which equals what .head still has (no unlocks
so far), thus signalling the lock to be free when it isn't.

> The limit isn't 65535 because of the ticket mechanism, but because of the
> rspin mechanism, where we need a "no cpu is owning the lock" value. Without
> the recursive locks the limit would be 65536 (or 4096 today).

I think this rather belongs ...

>>>> Therefore my suggestion would be to only (mention) go(ing) up to 32k.
>>>>
>>>>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>>>>> ---
>>>>>  xen/common/spinlock.c      |  1 +
>>>>>  xen/include/xen/spinlock.h | 18 +++++++++---------
>>>>>  2 files changed, 10 insertions(+), 9 deletions(-)
>>>>
>>>> Shouldn't this also bump the upper bound of the NR_CPUS range then
>>>> in xen/arch/Kconfig?
>>>
>>> Fine with me, I can add another patch to the series doing that.
>>
>> Why not do it right here? The upper bound there is like it is only
>> because of the restriction that's lifted here.

... here (for having nothing to do with the supposed lack of hanging that
I'm seeing)?

> I'd prefer splitting the two instances, but if you prefer it to be in a
> single patch, so be it.

I'm not going to insist - if you want to do it separately, please do.
Perhaps others would actually prefer it that way ...

Jan