
Re: [Xen-devel] [PATCH for-4.12 V4] x86/altp2m: fix HVMOP_altp2m_set_domain_state race




> On Feb 15, 2019, at 2:02 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> 
>>>> On 15.02.19 at 14:51, <George.Dunlap@xxxxxxxxxx> wrote:
> 
>> 
>>> On Feb 15, 2019, at 1:47 PM, Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx> wrote:
>>> 
>>> On 15/02/2019 13:37, George Dunlap wrote:
>>>> 
>>>>>> The one issue is that domain_pause_except_self() currently is actually a
>>>>>> deadlock risk if two different vcpus start it at the same time.  I think
>>>>>> the attached patch (compile-tested only) should fix this issue; after
>>>>>> this patch you should be able to use domain_pause_except_self() in
>>>>>> altp2m_set_domain_state instead.
>>>>> There's one thing I don't really like here, which is a result of the
>>>>> (necessary) re-use of the hypercall deadlock mutex: this certainly
>>>>> poses the risk of getting called from a context where the lock was
>>>>> already acquired. Therefore I'd like to suggest using this lock in a
>>>>> recursive way (here and elsewhere).
>>> 
>>> I can't think of a use case where we would want to tolerate recursion on
>>> the hypercall deadlock spinlock.
>> 
>> It sounds like Jan is specifically thinking that someone may (say) call 
>> domctl_lock(), then afterwards call domain_pause_except_self().
>> 
>> Of course, that would deadlock immediately, so would probably get caught 
>> before the patch even got to `git send-email`.
> 
> Indeed. The situation I'm worried about is if this was put on some
> error path, which might never be hit in testing.

Hmm, or other unusual paths.  I’m leaning more towards saying we should use 
spin_lock_recursive(); it’s not *that* much more complication, is it, Andy?
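
For concreteness, here is a minimal sketch of the shape being discussed
(illustrative only, not the attached patch): serialise in-domain callers of
domain_pause_except_self() on d->hypercall_deadlock_mutex so that two vcpus
can't sit waiting to pause each other.

    #include <xen/errno.h>
    #include <xen/sched.h>

    int domain_pause_except_self(struct domain *d)
    {
        struct vcpu *v, *curr = current;

        if ( curr->domain == d )
        {
            /*
             * Don't let two vcpus of d race to pause each other: whoever
             * loses the trylock backs off with -ERESTART so the hypercall
             * can be retried, rather than deadlocking.
             */
            if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
                return -ERESTART;

            for_each_vcpu ( d, v )
                if ( v != curr )
                    vcpu_pause(v);

            spin_unlock(&d->hypercall_deadlock_mutex);
        }
        else
            domain_pause(d);

        return 0;
    }

If the caller already holds the mutex (e.g. something running under the
domctl lock, as in the scenario above), the trylock would presumably just
keep failing and the hypercall keep restarting; switching to the recursive
lock primitives would let such a caller straight through, which is the
trade-off being weighed here.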

 -George
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

