
Re: [Xen-devel] [xen-unstable test] 123379: regressions - FAIL


  • To: Juergen Gross <jgross@xxxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 13 Jun 2018 09:58:58 +0100
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>
  • Delivery-date: Wed, 13 Jun 2018 08:59:05 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 13/06/18 09:52, Juergen Gross wrote:
> On 12/06/18 17:58, Juergen Gross wrote:
>> On 08/06/18 12:12, Juergen Gross wrote:
>>> On 07/06/18 13:30, Juergen Gross wrote:
>>>> On 06/06/18 11:40, Juergen Gross wrote:
>>>>> On 06/06/18 11:35, Jan Beulich wrote:
>>>>>>>>> On 05.06.18 at 18:19, <ian.jackson@xxxxxxxxxx> wrote:
>>>>>>>>>  test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 14 guest-saverestore.2
>>>>>>> I thought I would reply again with the key point from my earlier mail
>>>>>>> highlighted, and go a bit further.  The first thing to go wrong in
>>>>>>> this was:
>>>>>>>
>>>>>>> 2018-05-30 22:12:49.320+0000: xc: Failed to get types for pfn batch (14 = Bad address): Internal error
>>>>>>> 2018-05-30 22:12:49.483+0000: xc: Save failed (14 = Bad address): Internal error
>>>>>>> 2018-05-30 22:12:49.648+0000: libxl-save-helper: complete r=-1: Bad address
>>>>>>>
>>>>>>> You can see similar messages in the other logfile:
>>>>>>>
>>>>>>> 2018-05-30 22:12:49.650+0000: libxl: libxl_stream_write.c:350:libxl__xc_domain_save_done: Domain 3:saving domain: domain responded to suspend request: Bad address
>>>>>>>
>>>>>>> All of these are reports of the same thing: xc_get_pfn_type_batch at
>>>>>>> xc_sr_save.c:133 failed with EFAULT.  I'm afraid I don't know why.
>>>>>>>
>>>>>>> There is no corresponding message in the host's serial log nor the
>>>>>>> dom0 kernel log.
>>>>>> I vaguely recall from the time when I had looked at the similar Windows
>>>>>> migration issues that the guest is already in the process of being 
>>>>>> cleaned
>>>>>> up when these occur. Commit 2dbe9c3cd2 ("x86/mm: silence a pointless
>>>>>> warning") intentionally suppressed a log message here, and the
>>>>>> immediately following debugging code (933f966bcd x86/mm: add
>>>>>> temporary debugging code to get_page_from_gfn_p2m()) was reverted
>>>>>> a little over a month later. This wasn't a follow-up to another patch
>>>>>> (fix), but followed the discussion rooted at
>>>>>> https://lists.xenproject.org/archives/html/xen-devel/2017-06/msg00324.html
>>>>> That was -ESRCH, not -EFAULT.
>>>> I've looked a little bit more into this.
>>>>
>>>> As we are seeing EFAULT being returned by the hypervisor, this either
>>>> means the tools are specifying an invalid address (quite unlikely)
>>>> or the buffers are not as MAP_LOCKED as we wish them to be.
>>>>
>>>> Is there a way to see whether the host was experiencing some memory
>>>> shortage, so the buffers might have been swapped out?
>>>>
>>>> man mmap tells me: "This implementation will try to populate (prefault)
>>>> the whole range but the mmap call doesn't fail with ENOMEM if this
>>>> fails. Therefore major faults might happen later on."
>>>>
>>>> And: "One should use mmap(2) plus mlock(2) when major faults are not
>>>> acceptable after the initialization of the mapping."
>>>>
>>>> With osdep_alloc_pages() in tools/libs/call/linux.c touching all the
>>>> hypercall buffer pages before doing the hypercall I'm not sure this
>>>> could be an issue.
>>>>
>>>> Any thoughts on that?
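
To make the mmap(2)/mlock(2) point in the quoted text concrete, here is a
minimal sketch of how a locked, pre-faulted hypercall buffer could be
allocated. It is illustrative only and is not claimed to match
osdep_alloc_pages() exactly; the function name alloc_locked_buffer is made up.

#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * Sketch only: allocate nr_pages of page-aligned memory that is locked in
 * RAM and populated before it is handed to a hypercall.  This follows the
 * mmap(2)+mlock(2) approach recommended by the man page quoted above; it is
 * not the actual libxencall implementation.
 */
static void *alloc_locked_buffer(size_t nr_pages)
{
    size_t size = nr_pages * sysconf(_SC_PAGESIZE);
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);

    if (p == MAP_FAILED)
        return NULL;

    /*
     * MAP_LOCKED tries to populate the range but may silently leave parts
     * unpopulated; mlock() fails instead of allowing later major faults.
     */
    if (mlock(p, size)) {
        munmap(p, size);
        return NULL;
    }

    /* Touch every page, mirroring what the tools do before the hypercall. */
    memset(p, 0, size);

    return p;
}
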
>>> Ian, is there a chance to dedicate a machine to a specific test trying
>>> to reproduce the problem? In case we manage to get this failure in a
>>> reasonable time frame I guess the most promising approach would be to
>>> use a test hypervisor producing more debug data. If you think this is
>>> worth doing I can write a patch.
>> Trying to reproduce the problem in a limited test environment finally
>> worked: doing a loop of "xl save -c" produced the problem after 198
>> iterations.
>>
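
A reproduction loop of that kind is easy to script; the sketch below simply
wraps "xl save -c" (checkpoint the domain and leave it running) in a loop.
The domain name "guest" and the checkpoint path are placeholders, and this
does not claim to be the exact setup that was used.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* "xl save -c <domain> <file>" checkpoints the domain but leaves it running. */
    for (int i = 1; i <= 500; i++) {
        if (system("xl save -c guest /var/tmp/guest.chk") != 0) {
            fprintf(stderr, "failed after %d iterations\n", i);
            return 1;
        }
    }
    return 0;
}
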
>> I have asked a SUSE engineer working on kernel memory management whether
>> he could think of something. His idea is that maybe some kthread could be
>> the reason for our problem, e.g. trying page migration or compaction
>> (at least on the test machine I've looked at compaction of mlocked
>> pages is allowed: /proc/sys/vm/compact_unevictable_allowed is 1).
>>
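
For reference, that knob can also be checked programmatically; a trivial
sketch (a value of 1 means compaction may migrate unevictable, i.e. mlocked,
pages):

#include <stdio.h>

int main(void)
{
    int val = -1;
    FILE *f = fopen("/proc/sys/vm/compact_unevictable_allowed", "r");

    if (!f) {
        perror("compact_unevictable_allowed");
        return 1;
    }
    if (fscanf(f, "%d", &val) != 1)
        val = -1;
    fclose(f);
    printf("compact_unevictable_allowed = %d\n", val);
    return 0;
}
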
>> In order to be really sure nothing in the kernel can temporarily
>> switch hypercall buffer pages to read-only or invalid for the hypervisor,
>> we'll have to modify the privcmd driver interface: it will have to
>> gain knowledge of which pages are handed over to the hypervisor as buffers
>> in order to be able to lock them accordingly via get_user_pages().
>>
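
The kind of pinning meant here would look roughly as follows. This is only a
sketch of the idea, not the actual privcmd change; the helper name is made up,
and the get_user_pages_fast() signature shown is the ~4.x one (later kernels
take FOLL_* flags instead of a "write" int).

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/slab.h>

/*
 * Sketch only: pin the user pages backing a hypercall buffer so the kernel
 * cannot transiently unmap or write-protect them while the hypervisor
 * accesses them.
 */
static long pin_hypercall_buffer(unsigned long uaddr, unsigned long nr_pages,
                                 struct page ***pages_out)
{
    struct page **pages;
    long pinned;

    pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
    if (!pages)
        return -ENOMEM;

    /* write = 1: the hypervisor writes into these buffers. */
    pinned = get_user_pages_fast(uaddr, nr_pages, 1, pages);
    if (pinned != nr_pages) {
        /* Release whatever was pinned and fail. */
        while (pinned > 0)
            put_page(pages[--pinned]);
        kfree(pages);
        return pinned < 0 ? pinned : -EFAULT;
    }

    *pages_out = pages;
    return 0;
}

On the release side each page would be dropped again with put_page()
(preceded by set_page_dirty_lock() if the hypervisor wrote to it) once the
hypercall has returned.
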
>> While this is a possible explanation of the fault we are seeing, it might
>> still have another cause. So I'm going to apply some modifications
>> to the hypervisor to get some more diagnostics in order to verify the
>> suspected kernel behavior is really the reason for the hypervisor to
>> return EFAULT.
> I was lucky. Took only 39 iterations this time.
>
> The debug data confirms the theory that the kernel is setting the PTE to
> invalid or read-only for a short amount of time:
>
> (XEN) fixup for address 00007ffb9904fe44, error_code 0002:
> (XEN) Pagetable walk from 00007ffb9904fe44:
> (XEN)  L4[0x0ff] = 0000000458da6067 0000000000019190
> (XEN)  L3[0x1ee] = 0000000457d26067 0000000000018210
> (XEN)  L2[0x0c8] = 0000000445ab3067 0000000000006083
> (XEN)  L1[0x04f] = 8000000458cdc107 000000000001925a
> (XEN) Xen call trace:
> (XEN)    [<ffff82d0802abe31>] __copy_to_user_ll+0x27/0x30
> (XEN)    [<ffff82d080272edb>] arch_do_domctl+0x5a8/0x2648
> (XEN)    [<ffff82d080206d5d>] do_domctl+0x18fb/0x1c4e
> (XEN)    [<ffff82d08036d1ba>] pv_hypercall+0x1f4/0x43e
> (XEN)    [<ffff82d0803734a6>] lstar_enter+0x116/0x120
>
> The page was writable again when the page walk data was collected, but
> the A and D bits are still 0 (which should not be the case if the kernel
> hadn't touched the PTE, as the hypervisor had read from that page a few
> instructions before the failed write).
>
> Starting with the Xen patches now...
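
To make the A/D-bit observation in the quoted walk concrete, the L1 entry
8000000458cdc107 decodes (using the standard x86 PTE bit layout) to present,
writable, user, global and NX, with both Accessed and Dirty clear. A quick
decode sketch:

#include <stdio.h>
#include <stdint.h>

#define PTE_P  (1ULL << 0)   /* present */
#define PTE_RW (1ULL << 1)   /* writable */
#define PTE_US (1ULL << 2)   /* user */
#define PTE_A  (1ULL << 5)   /* accessed */
#define PTE_D  (1ULL << 6)   /* dirty */
#define PTE_G  (1ULL << 8)   /* global */
#define PTE_NX (1ULL << 63)  /* no-execute */

int main(void)
{
    uint64_t l1e = 0x8000000458cdc107ULL;  /* L1[0x04f] from the walk above */

    printf("P=%d RW=%d US=%d A=%d D=%d G=%d NX=%d\n",
           !!(l1e & PTE_P), !!(l1e & PTE_RW), !!(l1e & PTE_US),
           !!(l1e & PTE_A), !!(l1e & PTE_D),
           !!(l1e & PTE_G), !!(l1e & PTE_NX));
    return 0;
}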

Given that walk, I'd expect the spurious pagefault logic to have kicked
in, and retried.

Presumably the spurious walk logic saw the non-present/read-only mappings?

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

