
Re: [Xen-devel] [PATCH] x86/nested-hap: Fix handling of L0_ERROR


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Tue, 19 Nov 2019 20:45:55 +0000
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Tue, 19 Nov 2019 20:46:11 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 19/11/2019 15:23, Jan Beulich wrote:
> On 19.11.2019 15:58, Andrew Cooper wrote:
>> On 19/11/2019 11:13, Jan Beulich wrote:
>>> On 18.11.2019 19:15, Andrew Cooper wrote:
>>> I take it you imply that L0_ERROR would need raising (as per the
>>> auxiliary code fragment adding the "(access_x && *page_order)"
>>> check), but I wonder whether that would really be correct. This
>>> depends on what L0_ERROR really is supposed to mean: An error
>>> because of actual L0 settings (x=0 in our case), or an error
>>> because of intended L0 settings (x=1 in our case). After all a
>>> violation of just the p2m_access (which also affects r/w/x)
>>> doesn't get reported by nestedhap_walk_L0_p2m() as L0_ERROR
>>> either (and hence would, as it seems to me, lead to a similar
>>> live lock).
>>>
>>> Therefore I wonder whether your initial idea of doing the
>>> shattering right here wouldn't be the better course of action.
>>> nestedhap_fix_p2m() could either install the large page and then
>>> shatter it right away, or it could install just the individual
>>> small page. Together with the different npfec adjustment model
>>> suggested below (leading to npfec.present to also get updated in
>>> the DONE case) a similar "insn-fetch && present" conditional (to
>>> that introduced for XSA-304) could then be used there.
>>>
>>> Even better - by making the violation checking around the
>>> original XSA-304 addition a function (together with the 304
>>> addition), such a function might then be reusable here. This
>>> might then address the p2m_access related live lock as well.
>> This is all unrelated to the patch.
> I don't think so.

This patch is not a fix for the XSA-304 livelock.

It is an independent bug discovered while investigating the livelock.

It may, or may not, form part of the XSA-304 livelock bugfix, depending
on how the rest of the investigation goes.

>  At the very least defining what exactly L0_ERROR
> is intended to mean is pretty relevant here.

The intent of the code is clear (at least, to me).

It means #NPF/EPT_VIOLATION/EPT_MISCONFIG in the L01 part of the nested
walk.
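For the avoidance of doubt, the shape of that classification can be modelled with a trivial, self-contained sketch (all names here are hypothetical illustrations, not Xen's actual identifiers):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the two-stage nested walk.  An L2 guest-physical address
 * is first translated via the L12 tables (owned by the L1 hypervisor),
 * and the result via the L01 tables (owned by Xen).  A fault in the
 * second step is what the real code reports as L0_ERROR. */
enum nested_rc { NESTED_DONE, NESTED_L1_ERROR, NESTED_L0_ERROR };

static enum nested_rc model_nested_walk(bool l12_maps, bool l01_maps)
{
    if ( !l12_maps )
        return NESTED_L1_ERROR;  /* Fault belongs to the L1 hypervisor. */
    if ( !l01_maps )
        return NESTED_L0_ERROR;  /* Fault belongs to Xen's L01 tables. */
    return NESTED_DONE;          /* Combined L02 translation succeeded. */
}
```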

>>>> @@ -181,6 +180,18 @@ nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
>>>>      *L0_gpa = (mfn_x(mfn) << PAGE_SHIFT) + (L1_gpa & ~PAGE_MASK);
>>>>  out:
>>>>      __put_gfn(p2m, L1_gpa >> PAGE_SHIFT);
>>>> +
>>>> +    /*
>>>> +     * When reporting L0_ERROR, rewrite npfec to match what would have occurred
>>>> +     * if hardware had walked the L0, rather than the combined L02.
>>>> +     */
>>>> +    if ( rc == NESTEDHVM_PAGEFAULT_L0_ERROR )
>>>> +    {
>>>> +        npfec->present = !mfn_eq(mfn, INVALID_MFN);
>>> To be in line with the conditional a few lines up from here,
>>> wouldn't this better be !mfn_valid(mfn)?
>> That's not how the return value from get_gfn_*() works, and would break
>> the MMIO case.
> How that (for the latter part of your reply)? The MMIO case produces
> NESTEDHVM_PAGEFAULT_DIRECT_MMIO, i.e. doesn't even enter this if().
> Hence my remark elsewhere that the MMIO case isn't taken care of in
> the first place.
>
>>> Should there ever be a case to clear the flag when it was set? If
>>> a mapping has gone away between the time the exit condition was
>>> detected and the time we re-evaluate things here, I think it
>>> should still report "present" back to the caller.
>> No - absolutely not.  We must report the property of the L0 walk, as we
>> found it.
>>
>> Pretending it was present when it wasn't is a sure-fire way of leaving
>> further bugs lurking.
> But if npfec.present is set, it surely was set at the time of the
> hardware walk. And _that's_ what npfec is supposed to represent.
>
>>>  Taking both
>>> remarks together I'm thinking of
>>>
>>>         if ( mfn_valid(mfn) )
>>>             npfec->present = 1;
>>>
>>>> +        npfec->gla_valid = 0;
>>> For this, one the question is whose linear address is meant here.
>> The linear address (which was L2's) is nonsensical when we've taken an
>> L0 fault.  This is why it is clobbered unconditionally.
> And this is also why I was saying ...
>
>>> If it's L2's, then it shouldn't be cleared. If it's L1's, then
>>> it would seem to me that it should have been avoided to set the
>>> field, or at least it should have been cleared the moment we're
>>> past L12 handling.
> ... this. If it's nonsensical, it shouldn't have been set to begin
> with, or be squashed earlier than here.

There seems to be a lot of confusion here.

This is the correct place to discard it.

Hardware did a real walk of L02 and got a real gpa and npfec (optionally
with a real gla), that overall identified "something went wrong".

Upon interpreting "what went wrong", Xen may decide that it is a problem
in the L01 walk, rather than the L12 or combined L02.

A problem in the L01 walk is handled by returning L0_ERROR back to the
common code, discarding the current NPF/EPT_VIOLATION/MISCONFIG context,
and synthesizing the state that would have occurred if hardware were to
have performed the L01 walk instead of L02, so it can be correctly
interpreted by the common code on the hostp2m.

gpa gets adjusted.  npfec doesn't (and that is the subject of this
patch).  gla doesn't even get passed in for potential adjustment.
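In code form, the synthesis described above might look like the following minimal sketch (the struct and helper names are hypothetical, modelled loosely on the quoted hunk rather than taken from the actual patch):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for Xen's npf error-code structure, kept to the
 * two fields the quoted hunk touches. */
typedef struct {
    bool present;
    bool gla_valid;
} npfec_model_t;

#define MODEL_INVALID_MFN (~0UL)

/* On L0_ERROR, synthesize the npfec that a hardware walk of the L01
 * tables would have produced.  'mfn' is the frame found (or not) by the
 * L0 lookup. */
static void rewrite_npfec_for_l0_error(npfec_model_t *npfec,
                                       unsigned long mfn)
{
    /* Present iff the L0 lookup yielded a real frame. */
    npfec->present = (mfn != MODEL_INVALID_MFN);

    /* The guest linear address belonged to the L02 walk; it is
     * meaningless for an L01 fault, so discard it unconditionally. */
    npfec->gla_valid = false;
}
```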

The gla isn't actually an interesting value, and Xen's use of it for
various cache maintenance purposes looks buggy.  Gla is specific to the
L2 guest's register state and virtual memory layout, and, in particular,
has no bearing on anything where we've decided that we need a correction
to the L01 mapping.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

