
Re: [Xen-devel] [PATCH v5 1/5] xen/spinlocks: in debug builds store cpu holding the lock



On 12.09.2019 15:42, Juergen Gross wrote:
> On 12.09.19 15:36, Jan Beulich wrote:
>> On 12.09.2019 15:28, Juergen Gross wrote:
>>> @@ -267,6 +288,7 @@ int _spin_trylock_recursive(spinlock_t *lock)
>>>   
>>>       /* Don't allow overflow of recurse_cpu field. */
>>>       BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
>>> +    BUILD_BUG_ON(SPINLOCK_RECURSE_BITS <= 0);
>>
>> This is too weak: While I don't think we strictly need 15 levels of
>> recursion, I also don't think we'll get away with just 1. I think
>> this minimally needs to be "<= 1", perhaps better "<= 2". Other
>> thoughts (also by others) on the precise value to use here
>> appreciated. With this suitably taken care of (which can happen
>> while committing, but must not be forgotten)
>> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> "2" should be no problem, as the other added
> 
> BUILD_BUG_ON(LOCK_DEBUG_PAD_BITS <= 0);
> 
> is implying that already.

That's not the point though - after your change has gone in,
the two bitfields may change independently. The question is what
recursion depth we think we minimally need to run the code as it
is right now. For example, I'm not sure how much nesting we need
for the PCI devices lock right now. (For the other locks I don't
think nesting goes deeper than two or three levels.)
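
To make the trade-off concrete, here is a minimal sketch only, not the
actual Xen header: the 12/4 bit split, the field names and
SPINLOCK_MIN_RECURSE_DEPTH are assumptions made for illustration of how
a stronger compile-time bound on the recursion depth could be phrased:

/*
 * Illustrative sketch, not the real xen/spinlock.h layout.
 */
#define SPINLOCK_CPU_BITS        12
#define SPINLOCK_NO_CPU          ((1u << SPINLOCK_CPU_BITS) - 1)
#define SPINLOCK_RECURSE_BITS    (16 - SPINLOCK_CPU_BITS)
#define SPINLOCK_MAX_RECURSE     ((1u << SPINLOCK_RECURSE_BITS) - 1)

/* Deepest nesting the current code is believed to rely on (assumed). */
#define SPINLOCK_MIN_RECURSE_DEPTH 2

struct demo_recursive_lock {
    unsigned int recurse_cpu:SPINLOCK_CPU_BITS;     /* cpu holding the lock */
    unsigned int recurse_cnt:SPINLOCK_RECURSE_BITS; /* current nesting depth */
};

/*
 * "SPINLOCK_RECURSE_BITS <= 0" only guarantees the counter has one bit,
 * i.e. a single acquisition; requiring a minimum representable depth is
 * the stronger check discussed above.
 */
_Static_assert(SPINLOCK_MAX_RECURSE >= SPINLOCK_MIN_RECURSE_DEPTH,
               "recurse_cnt too narrow for the nesting depth relied upon");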

Jan

