
Re: [Xen-devel] [PATCH 3/3] x86/mm-locks: apply a bias to lock levels for current domain



>>> On 20.12.18 at 10:05, <roger.pau@xxxxxxxxxx> wrote:
> On Wed, Dec 19, 2018 at 06:10:04PM +0000, George Dunlap wrote:
>> On 12/19/18 2:59 PM, Roger Pau Monné wrote:
>> >> Using 'current' means that potential deadlocks which would currently
>> >> cause a BUG() no longer will.  I'm fine with not adding extra
>> >> protections that aren't there now; but I don't want to remove
>> >> protections that are.
>> > 
>> > The lock ordering enforcement is still kept as-is, but Xen is allowed
>> > to take the caller's mm locks in the right order (whether the caller
>> > is privileged or not) after having taken a subject domain's locks,
>> > also in the correct order.
>> 
>> Suppose we have mm_lock_x, with level 8, and mm_lock_y, with level 16.
>> 
>> Suppose we have H1, which grabs current->mm_lock_x and then tgt->mm_lock_y.
> 
> With my proposal H1 won't be a valid lock sequence, since
> current->mm_lock_x level > tgt->mm_lock_y level.
> 
> With your proposal the validity of H1 depends on whether 'current' is
> Dom0 or not. If current == Dom0, H1 is invalid, since
> current->mm_lock_x level > tgt->mm_lock_y level. If current != Dom0
> then the H1 sequence is valid.
> 
> I think this is not ideal, since the correctness of the H1 sequence
> with your proposal depends on whether the caller is Dom0 or an
> unprivileged domain. With my proposal there's no such difference, and
> the validity of sequences doesn't depend on whether the caller is
> privileged or not, which makes it easier to reason about.
> 
>> And suppose we have H2, which grabs current->mm_lock_y and then tgt->mm_lock_x.
> 
> H2 won't be a valid locking sequence with my proposal either, since
> current->mm_lock_y level > tgt->mm_lock_x level, so H2 would always
> trigger the BUG().
> 
> With my proposal all of the current domain's mm locks always have a
> higher level than any of the target domain's mm locks.
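> 
> As a sketch of what I mean (the helper name and bias constant are made
> up for illustration; the real levels are defined in mm-locks.h):
> 
>     /* Hypothetical bias, larger than any base lock level. */
>     #define MM_LOCK_LEVEL_BIAS 64
> 
>     static inline int _lock_level(const struct domain *d, int l)
>     {
>         /*
>          * Boost every lock belonging to the calling domain, so any
>          * current->* lock always orders above any tgt->* lock.
>          */
>         return l + (d == current->domain ? MM_LOCK_LEVEL_BIAS : 0);
>     }
> 
> With the levels from your example, current->mm_lock_x becomes
> 8 + 64 = 72 while tgt->mm_lock_y stays at 16, so both H1 and H2 trip
> the ordering check regardless of who the caller is.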
> 
>> 
>> And suppose domA calls H1 on domB at the same time that domB calls H2 on
>> domA.  We could have the following sequence:
>> 
>> * H1: grab A->mm_lock_x
>> * H2: grab B->mm_lock_y
>> * H1: wait on B->mm_lock_y
>> * H2: wait on A->mm_lock_x #DEADLOCK
>> 
>> With the current mm lock checking, H2 would cause a BUG() the first time
>> it was called.
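>> 
>> (For reference, the check in mm-locks.h is roughly the following; the
>> exact shape may differ, but the idea is a per-CPU record of the
>> highest lock level taken so far:
>> 
>>     static inline void __check_lock_level(int l)
>>     {
>>         /* Taking a lock at or below the last recorded level is an
>>          * ordering violation. */
>>         if ( unlikely(__get_lock_level() > l) )
>>         {
>>             printk("mm locking order violation: %i > %i\n",
>>                    __get_lock_level(), l);
>>             BUG();
>>         }
>>     }
>> 
>> H2 takes mm_lock_y (level 16) first and then mm_lock_x (level 8), so
>> the second acquisition fails this check.)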
>> 
>> With my "special privilege only" boosting you'd also get a BUG() in at
>> least one of the invocations.
>> 
>> With your "current bias" patch, no BUG() would be encountered; we'd only
>> discover the deadlock once a live server had actually deadlocked.
>> 
>> This is what I'm talking about -- with "boost dom0", you have a global
>> order to the locks.  It goes:
>> 
>> 1-8: All domU locks (in the order listed in mm-locks.h)
>> 9-16: All dom0 locks
>> 
>> Thus we know for certain that there can't be a caller / callee deadlock
>> that's not detected.  With your patch, there isn't a global order: the
>> order changes based on who made the hypercall, so it's more difficult to
>> reason about whether there's a deadlock or not.
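>> 
>> A minimal sketch of that boost (is_control_domain() is the existing
>> dom0 predicate; everything else here is illustrative):
>> 
>>     static inline int _lock_level(const struct domain *d, int l)
>>     {
>>         /*
>>          * Boost dom0's locks unconditionally, with a bias larger than
>>          * any base level.  This gives a fixed global order (all domU
>>          * locks below all dom0 locks) independent of who made the
>>          * hypercall.
>>          */
>>         return l + (is_control_domain(d) ? MM_LOCK_LEVEL_BIAS : 0);
>>     }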
>> 
>> So, do H1 and H2 exist right now?  I think probably not, but I can't
>> immediately say.  Will such a pair *never* exist?  I don't think I can
>> guarantee that either.  That's why I want something to check.
> 
> I'm fine with this proposal; it's just that if it's safe for Dom0 to
> take any other domain's mm locks and then any Dom0 mm lock, it should
> also be safe for any domain, not only Dom0.

Since I'm not sure whether this was implied here: "Any" is of
course wrong, or else there'd be ABBA deadlock potential
between two such arbitrary domains. I suppose you still mean
"any domain controlling another domain", with such a domain
only acquiring locks of the domain it controls.

Jan

