
Re: [Xen-devel] [PATCH 3/3] x86/mm-locks: apply a bias to lock levels for current domain


  • To: George Dunlap <george.dunlap@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 19 Dec 2018 14:07:01 +0000
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Wei Liu <wei.liu2@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Tim Deegan <tim@xxxxxxx>
  • Delivery-date: Wed, 19 Dec 2018 14:08:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 19/12/2018 13:55, George Dunlap wrote:
> On 12/19/18 12:42 PM, Andrew Cooper wrote:
>> On 19/12/2018 12:38, George Dunlap wrote:
>>> On 12/19/18 12:10 PM, Roger Pau Monné wrote:
>>>> On Wed, Dec 19, 2018 at 11:40:14AM +0000, George Dunlap wrote:
>>>>> On 12/18/18 4:05 PM, Roger Pau Monne wrote:
>>>>>> The paging_log_dirty_op() function takes mm locks from a subject
>>>>>> domain and then attempts to perform copy-to operations against the
>>>>>> caller domain in order to copy the result of the hypercall into the
>>>>>> caller-provided buffer.
>>>>>>
>>>>>> This works fine when the caller is a non-paging domain, but triggers a
>>>>>> lock order panic when the caller is a paging domain, because at the
>>>>>> point where the copy-to operation is performed the subject domain's
>>>>>> paging lock is held, and the copy operation requires taking the
>>>>>> caller's p2m lock, which has a lower level.
>>>>>>
>>>>>> Fix this limitation by adding a bias to the levels of the caller
>>>>>> domain's mm locks, so that the lowest caller-domain mm lock always has
>>>>>> a level greater than the highest subject-domain lock level. This
>>>>>> allows taking the subject domain's mm locks and then the caller
>>>>>> domain's mm locks, while keeping the same lock ordering and confining
>>>>>> the changes mostly to mm-locks.h.
>>>>>>
>>>>>> Note that so far only this flow (taking a subject domain's locks and
>>>>>> then the caller domain's) has been identified, but not all possible
>>>>>> code paths have been inspected. Hence this solution attempts to be a
>>>>>> non-intrusive fix for the problem at hand, without ruling out further
>>>>>> changes in the future if other valid code paths are found that require
>>>>>> more complex lock level ordering.
>>>>>>
>>>>>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
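
For reference, the approach amounts to roughly the following (an
illustrative sketch only -- the helper name, bias value, and use of
current->domain are assumptions, not the actual patch):

    /* Bias applied to the mm lock levels of the current (caller) domain,
     * so that its lowest lock still orders above the subject domain's
     * highest lock, while preserving the existing intra-domain ordering. */
    #define MM_LOCK_LEVEL_BIAS 64

    static inline int mm_lock_level(const struct domain *d, int level)
    {
        /* Locks of the caller domain sort above all subject-domain locks. */
        return (d == current->domain) ? level + MM_LOCK_LEVEL_BIAS : level;
    }

With a scheme like this, taking the subject's paging lock and then the
caller's p2m lock sees a strictly increasing level sequence, so the
ordering check passes.
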
>>>>> As a quick fix I think this general approach is OK; the thing I don't
>>>>> like is that it's symmetric.  We don't *expect* to ever have a situation
>>>>> where A grabs one of its own MM locks and then one of B's, *and* B then
>>>>> grabs one of its own locks and then A's; but it could happen.
>>>> I have not identified such a scenario ATM, but I guess we cannot rule
>>>> out future features needing such interlocking. In any case, I think
>>>> this is something that would have to be solved when we come across
>>>> such a scenario.
>>> Right -- and the purpose of these macros is to make sure that we
>>> discover such potential deadlocks in testing rather than in production.
>>>
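
To illustrate, the ordering check in mm-locks.h boils down to something
like the following (a simplified sketch; the real macros also track
recursive locking and per-lock unlock levels):

    /* Each CPU remembers the level of the last mm lock it took; taking a
     * lock with a lower or equal level trips the assertion, so an inverted
     * ordering shows up as a clean panic in testing rather than as a rare
     * deadlock in production. */
    static DEFINE_PER_CPU(int, mm_lock_level);

    static inline void _mm_lock(spinlock_t *l, int level)
    {
        ASSERT(level > this_cpu(mm_lock_level));
        spin_lock(l);
        this_cpu(mm_lock_level) = level;
    }
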
>>>>> Since we've generally identified dom0, which may be grabbing locks of
>>>>> a PVH stubdom, which may be grabbing locks of a normal domU, would it
>>>>> be possible / make sense instead to give a 2x bonus for dom0, and a 1x
>>>>> bonus for "is_priv_for" domains?
>>>> Jan pointed out such a case, but I'm not sure I can see how this is
>>>> supposed to happen even given the scenario above. I have to admit,
>>>> however, that I'm not that familiar with the mm code, so it's likely
>>>> I'm missing something.
>>>>
>>>> Hypercalls AFAIK have a single target (or subject) domain, so even if
>>>> there's a stubdomain relation I'm not sure I see why that would
>>>> require this kind of locking. Any domain can perform hypercalls
>>>> against a single subject domain, and the hypervisor itself doesn't
>>>> even know about stubdomain relations.
>>> We're considering three potential cases:
>>>
>>> A. dom0 makes a hypercall w/ domU as a target.
>>> B. dom0 makes a hypercall w/ stubdom as a target.
>>> C. stubdom makes a hypercall w/ domU as a target.
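
Concretely, the proposal under discussion would be something like the
following (a hypothetical sketch: is_hardware_domain() and d->target
exist in the tree, but the helper and the bias constant are made up
here for illustration):

    /* Bias a domain's mm lock levels by its privilege relative to the
     * subject, so dom0 -> stubdom -> domU lock nesting stays ordered. */
    static inline int mm_lock_bias(const struct domain *d,
                                   const struct domain *subject)
    {
        if ( is_hardware_domain(d) )
            return 2 * MM_LOCK_LEVEL_BIAS; /* dom0: 2x bonus */
        if ( d->target == subject )
            return MM_LOCK_LEVEL_BIAS;     /* is_priv_for: 1x bonus */
        return 0;                          /* plain domU: no bias */
    }
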
>> I'm afraid that this approach isn't appropriate.
>>
>> The privilege of the callee has no bearing on the correctness of the
>> locking.  Any logic based on IS_PRIV/target is buggy.  (Consider the
>> case where XSM lets an otherwise plain HVM domain use some of the more
>> interesting hypercalls.)
> You're not using the word "buggy" correctly.

"buggy" means that the logic is incorrectly, not that it manifests the
incorrectness in all cases.

> <snip>
>
> Yes, if someone uses XSM to bypass the IS_PRIV() functionality to give
> one domain access over another, then the lock checking will trigger.

No one should be able to trigger assertions in the hypervisor simply by
editing the XSM policy.

This quite clearly demonstrates that the proposed logic isn't appropriate.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel