
Re: [Xen-devel] [PATCH 1/3] xen: drop in_atomic()


  • To: Jan Beulich <JBeulich@xxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 24 May 2019 13:30:56 +0100
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Delivery-date: Fri, 24 May 2019 12:31:23 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 24/05/2019 09:39, Jan Beulich wrote:
>>>> On 24.05.19 at 10:34, <jgross@xxxxxxxx> wrote:
>> On 24/05/2019 08:38, Jan Beulich wrote:
>>>>>> On 24.05.19 at 07:41, <jgross@xxxxxxxx> wrote:
>>>> On 22/05/2019 12:10, Jan Beulich wrote:
>>>>>>>> On 22.05.19 at 11:45, <jgross@xxxxxxxx> wrote:
>>>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>>>> @@ -3185,22 +3185,6 @@ static enum hvm_translation_result __hvm_copy(
>>>>>>  
>>>>>>      ASSERT(is_hvm_vcpu(v));
>>>>>>  
>>>>>> -    /*
>>>>>> -     * XXX Disable for 4.1.0: PV-on-HVM drivers will do grant-table ops
>>>>>> -     * such as query_size. Grant-table code currently does copy_to/from_guest
>>>>>> -     * accesses under the big per-domain lock, which this test would disallow.
>>>>>> -     * The test is not needed until we implement sleeping-on-waitqueue when
>>>>>> -     * we access a paged-out frame, and that's post 4.1.0 now.
>>>>>> -     */
>>>>>> -#if 0
>>>>>> -    /*
>>>>>> -     * If the required guest memory is paged out, this function may sleep.
>>>>>> -     * Hence we bail immediately if called from atomic context.
>>>>>> -     */
>>>>>> -    if ( in_atomic() )
>>>>>> -        return HVMTRANS_unhandleable;
>>>>>> -#endif
>>>>> Dealing with this TODO item is of course much appreciated, but
>>>>> should it really be deleted altogether? The big-domain-lock issue
>>>>> is gone afair, in which case dropping the #if 0 would seem
>>>>> possible to me, even if it's not strictly needed without the sleep-
>>>>> on-waitqueue behavior mentioned.
>>>> I just had a look and found the following path:
>>>>
>>>> do_domctl() (takes domctl_lock and hypercall_deadlock_mutex)
>>>>   arch_do_domctl()
>>>>     raw_copy_from_guest()
>>>>       copy_from_user_hvm()
>>>>         hvm_copy_from_guest_linear()
>>>>           __hvm_copy()
>>>>
>>>> So no, we can't do the in_atomic() test IMO.
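
[Archive note: a minimal, self-contained sketch of the failure mode described in the call chain above. This is a toy model, not Xen code; the lock()/unlock() helpers, the preempt counter and the string return values are stand-ins, assuming only that taking a lock marks the CPU as atomic and that the re-enabled check would refuse the copy in that state.]

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for a per-CPU preempt counter. */
    static unsigned int preempt_count;

    /* Assumption: taking a lock marks the CPU as atomic; releasing it undoes that. */
    static void lock(void)   { preempt_count++; }
    static void unlock(void) { preempt_count--; }

    static bool in_atomic(void) { return preempt_count != 0; }

    /* Stand-in for __hvm_copy() with the removed check re-enabled. */
    static const char *copy_from_guest(void)
    {
        return in_atomic() ? "HVMTRANS_unhandleable" : "HVMTRANS_okay";
    }

    int main(void)
    {
        lock();                                             /* do_domctl() takes its locks ... */
        printf("with lock held: %s\n", copy_from_guest());  /* refused */
        unlock();
        printf("without lock:   %s\n", copy_from_guest());  /* succeeds */
        return 0;
    }

[Run as-is, the first copy is refused while the lock is held and the second succeeds, which is why re-enabling the check would break PVH dom0 domctls even though dom0 memory is never paged out.]
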
>>> Oh, right - that's a PVH constraint that could probably not even
>>> have been thought of at the time the comment was written. I'm still
>>> of the opinion though that at least the still applicable part of
>>> the comment should be kept in place. Whether this means also
>>> keeping in_atomic() itself is then an independent question, i.e.
>>> I wouldn't consider it overly bad if there was no implementation
>>> in the tree, but the above still served as documentation of what
>>> would need to be re-added. Still my preference would be for it
>>> to be kept.
>> Would you be okay with replacing the removed stuff above with:
>>
>> /*
>>  * If the required guest memory is paged out this function may sleep.
>>  * So in theory we should bail out if called in atomic context.
>>  * Unfortunately this is true for PVH dom0 doing domctl calls which
> ... this is true at least for ...
>
>>  * holds the domctl lock when accessing dom0 memory. OTOH dom0 memory
>>  * should never be paged out, so we are fine without testing for
>>  * atomic context.
>>  */
> Not sure about this Dom0-specific remark: Are we certain there are
> no other paths, similar to the gnttab one mentioned so far?

Why is __hvm_copy() so special?  It is just one of many places which can
end up touching guest memory.

A comment here isn't going to help anyone who actually runs into problems.

Given that the test has never been used, and no issues have been raised,
and this path isn't AFAICT special, I don't see why it should be
special-cased.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

