
Re: [Xen-devel] [PATCH v2] x86/mm: Add debug code to detect illegal page_lock and put_page_type ordering



On 24/01/18 15:43, Jan Beulich wrote:
>>>> On 24.01.18 at 16:02, <george.dunlap@xxxxxxxxxx> wrote:
>> The fix for XSA-242 depends on the same cpu never calling
>> _put_page_type() while holding a page_lock() for that page; doing so
>> may cause a deadlock under the right conditions.
>>
>> Furthermore, even before that, there was never any discipline for the
>> order in which page locks are grabbed; if there are any paths that
>> grab the locks for two different pages at once, we risk creating the
>> conditions for a deadlock to occur.
>>
>> These are believed to be safe, because it is believed that:
>> 1. No hypervisor paths ever lock two pages at once, and
>> 2. We never call _put_page_type() on a page while holding its page lock.
>>
>> Add a check to debug builds to catch any violations of these
>> assumptions.
>>
>> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
>> ---
>> v2:
>> - Make wrapper macros to get rid of ugly #ifdefs
>> - Use "current_locked_page*" prefix
>> - Reword commit message
>>
>> NB this doesn't address Andrew's comment from v1 about adding "more
>> than just a debug check".  I think we should check in the ASSERT()
>> while we discuss future potential work, and not let the best become
>> the enemy of the good.
> I agree, and hence
> Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
> unless Andrew really means to veto this going in.

Not especially, but how about:

* Note: If we subsequently find a valid use case for locking more than
one page at once, this safety logic will need adjusting.

in the comment describing the rationale, to explicitly state that we
have no particular problem with nested locking?

~Andrew
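
[Editor's note: the mechanism under discussion can be sketched as follows. This is a hedged, self-contained illustration, not the actual patch: `struct page_info` is reduced to a stub, the per-CPU variable is modelled as a single static (the sketch is single-threaded), and the wrapper names `current_locked_page_set()` / `current_locked_page_not_in_flight()` are illustrative stand-ins; the patch itself only commits to the "current_locked_page*" prefix mentioned in the v2 changelog.]

```c
#include <assert.h>
#include <stddef.h>

/* Stub for Xen's struct page_info; only the lock state matters here. */
struct page_info { int lock; };

/*
 * In Xen this would be a per-CPU variable (one slot per CPU, since the
 * invariant is about what a single CPU holds).  A single static suffices
 * for this single-threaded sketch.
 */
static struct page_info *current_locked_page;

/* Debug-build wrapper: record which page this CPU currently holds locked. */
static void current_locked_page_set(struct page_info *page)
{
    current_locked_page = page;
}

static int current_locked_page_not_in_flight(struct page_info *page)
{
    return current_locked_page != page;
}

static void page_lock(struct page_info *page)
{
    /* Invariant 1: no hypervisor path ever locks two pages at once. */
    assert(current_locked_page == NULL);
    page->lock = 1;
    current_locked_page_set(page);
}

static void page_unlock(struct page_info *page)
{
    page->lock = 0;
    current_locked_page_set(NULL);
}

static void _put_page_type(struct page_info *page)
{
    /*
     * Invariant 2: never drop a type reference on a page whose
     * page_lock() this CPU holds -- the deadlock condition the
     * XSA-242 fix depends on avoiding.
     */
    assert(current_locked_page_not_in_flight(page));
    /* ... actual type-count manipulation elided ... */
}
```

A call sequence such as `page_lock(&a); _put_page_type(&a);` would trip the second assertion, while `page_lock(&a); _put_page_type(&b);` is fine, which is exactly the distinction the debug check draws.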

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
