
[Xen-devel] [xen-4.6-testing test] 110183: regressions - FAIL



flight 110183 xen-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110183/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-xsm       5 xen-install              fail REGR. vs. 109509

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check    fail  like 109488
 test-xtf-amd64-amd64-3      45 xtf/test-hvm64-lbr-tsx-vmentry fail like 109509
 test-xtf-amd64-amd64-5      45 xtf/test-hvm64-lbr-tsx-vmentry fail like 109509
 test-xtf-amd64-amd64-2      45 xtf/test-hvm64-lbr-tsx-vmentry fail like 109509
 test-armhf-armhf-libvirt     13 saverestore-support-check    fail  like 109509
 test-armhf-armhf-xl-rtds     15 guest-start/debian.repeat    fail  like 109509
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop            fail like 109509
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop            fail like 109509
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check    fail  like 109509
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop             fail like 109509
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop             fail like 109509
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-xtf-amd64-amd64-3       65 xtf/test-pv32pae-xsa-194     fail   never pass
 test-xtf-amd64-amd64-2       65 xtf/test-pv32pae-xsa-194     fail   never pass
 test-xtf-amd64-amd64-5       65 xtf/test-pv32pae-xsa-194     fail   never pass
 test-xtf-amd64-amd64-4       65 xtf/test-pv32pae-xsa-194     fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start                  fail  never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64  9 windows-install        fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64  9 windows-install        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-xtf-amd64-amd64-1       65 xtf/test-pv32pae-xsa-194     fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start                  fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win10-i386  9 windows-install         fail never pass
 test-amd64-i386-xl-qemut-win10-i386  9 windows-install         fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386  9 windows-install        fail never pass
 test-amd64-amd64-xl-qemut-win10-i386  9 windows-install        fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64  9 windows-install         fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64  9 windows-install         fail never pass

version targeted for testing:
 xen                  314915cb4aa3865c8623516b65216b974a7d4e9a
baseline version:
 xen                  7496924db24a7946b0a81e20344920b4ac55921a

Last test of basis   109509  2017-05-17 00:49:13 Z   24 days
Testing same since   110183  2017-06-09 12:23:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
  George Dunlap <george.dunlap@xxxxxxxxxx>
  Gregory Herrero <gregory.herrero@xxxxxxxxxx>
  Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Kevin Tian <kevin.tian@xxxxxxxxx>
  Mohit Gambhir <mohit.gambhir@xxxxxxxxxx>
  Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
  Tim Deegan <tim@xxxxxxx>

jobs:
 build-amd64-xsm                                              pass    
 build-armhf-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumprun                                          pass    
 build-i386-rumprun                                           pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-armhf-armhf-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-armhf-armhf-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvh-amd                                  fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumprun-amd64                               pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumprun-i386                                 pass    
 test-amd64-amd64-xl-qemut-win10-i386                         fail    
 test-amd64-i386-xl-qemut-win10-i386                          fail    
 test-amd64-amd64-xl-qemuu-win10-i386                         fail    
 test-amd64-i386-xl-qemuu-win10-i386                          fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvh-intel                                fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 314915cb4aa3865c8623516b65216b974a7d4e9a
Author: Gregory Herrero <gregory.herrero@xxxxxxxxxx>
Date:   Fri Jun 9 13:58:57 2017 +0200

    stop_machine: fill fn_result only in case of error
    
    When stop_machine_run() is called with NR_CPUS as its last argument,
    the fn_result member must be filled only if an error happens, since it
    is shared across all CPUs.
    
    Assume CPU1 detects an error and sets fn_result to -1, then CPU2
    detects no error and sets fn_result to 0: the error detected by CPU1
    will be ignored.
    
    Note that in case multiple failures occur on different CPUs, only the
    last error will be reported.
    
    Signed-off-by: Gregory Herrero <gregory.herrero@xxxxxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    master commit: d8b833d78f6bfde9855a949b5e6d3790d78c0fb7
    master date: 2017-06-01 10:53:04 +0200

commit 866b2b274dd7dbe54bae0b27b07e150dd0c7233d
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Fri Jun 9 13:58:40 2017 +0200

    arm: fix build with gcc 7
    
    The compiler dislikes duplicate "const", and the ones it complains
    about look like they were in fact meant to be placed differently.
    
    Also fix array_access_okay() (just like on x86), despite the construct
    being unused on ARM: -Wint-in-bool-context, enabled by default in
    gcc 7, doesn't like multiplication in conditional operators. "Hide" it,
    at the risk of the next compiler version becoming smarter and
    recognizing even that. (The hope is that added smartness then would
    also better deal with legitimate cases like the one here.) The change
    could have been done in access_ok(), but I think we better keep it at
    the place the compiler is actually unhappy about.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Julien Grall <julien.grall@xxxxxxx>
    master commit: 9d3011bd1cd29f8f3841bf1b64d5ead9ed1434e8
    master date: 2017-05-19 10:12:08 +0200

commit 7a46badcf4eaa337070a6e7dda61698fd5a32cb3
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Fri Jun 9 13:58:11 2017 +0200

    x86: fix build with gcc 7
    
    -Wint-in-bool-context, enabled by default in gcc 7, doesn't like
    multiplication in conditional operators. "Hide" them, at the risk of
    the next compiler version becoming smarter and recognizing even those.
    (The hope is that added smartness then would also better deal with
    legitimate cases like the ones here.)
    
    The change could have been done in access_ok(), but I think we better
    keep it at the places the compiler is actually unhappy about.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: f32400e90c046a9fd76c8917a60d34ade9c02ea2
    master date: 2017-05-19 10:11:36 +0200

commit 38e8ab9e1c4f1f876481c2f2ebaf463c31fa7475
Author: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
Date:   Fri Jun 9 13:57:34 2017 +0200

    x86/mm: fix incorrect unmapping of 2MB and 1GB pages
    
    The same set of functions is used to set as well as to clean
    P2M entries, except that for clean operations INVALID_MFN (~0UL)
    is passed as a parameter. Unfortunately, when calculating an
    appropriate target order for a particular mapping INVALID_MFN
    is not taken into account which leads to 4K page target order
    being set each time even for 2MB and 1GB mappings. This eventually
    breaks down an EPT structure irreversibly into 4K mappings, which
    prevents subsequent high-order mappings of this area.
    
    Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
    Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    
    x86/NPT: deal with fallout from 2Mb/1Gb unmapping change
    
    Commit efa9596e9d ("x86/mm: fix incorrect unmapping of 2MB and 1GB
    pages") left the NPT code untouched, as there is no explicit alignment
    check matching the one in EPT code. However, the now more widespread
    storing of INVALID_MFN into PTEs requires adjustments:
    - calculations when shattering large pages may spill into the p2m type
      field (converting p2m_populate_on_demand to p2m_grant_map_rw) - use
      OR instead of PLUS,
    - the use of plain l{2,3}e_from_pfn() in p2m_pt_set_entry() results in
      all upper (flag) bits being clobbered - introduce and use
      p2m_l{2,3}e_from_pfn(), paralleling the existing L1 variant.
    
    Reported-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Tested-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
    Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: efa9596e9d167c8fb7d1c4446c10f7ca30453646
    master date: 2017-05-17 17:23:15 +0200
    master commit: 83520cb4aa39ebeb4eb1a7cac2e85b413e75a336
    master date: 2017-06-06 14:32:54 +0200

commit 13e84e665dccd908900043b7e2887a211bc08dfc
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Fri Jun 9 13:57:06 2017 +0200

    x86/pv: Align %rsp before pushing the failsafe stack frame
    
    Architecturally, all 64bit stacks are aligned on a 16 byte boundary
    before an exception frame is pushed.  The failsafe frame should not be
    special in this regard.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: cbcaccb5e991155a4ae85a032e990614c3dc6960
    master date: 2017-05-09 19:00:20 +0100

commit ff3f674fa25116f68f24ae43ed2f44ed86d8ca71
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Fri Jun 9 13:56:40 2017 +0200

    x86/pv: Fix bugs with the handling of int80_bounce
    
    Testing has revealed two issues:
    
     1) Passing a NULL handle to set_trap_table() is intended to flush the
        entire table.  The 64bit guest case (and 32bit guest on 32bit Xen, when it
        existed) called init_int80_direct_trap() to reset int80_bounce, but c/s
        cda335c279 which introduced the 32bit guest on 64bit Xen support omitted
        this step.  Previously therefore, it was impossible for a 32bit guest to
        reset its registered int80_bounce details.
    
     2) init_int80_direct_trap() doesn't honour the guest's request to have
        interrupts disabled on entry.  PVops Linux requests that interrupts are
        disabled, but Xen currently leaves them enabled when following the int80
        fastpath.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 55ab172a1f286742d918947ecb9b257ce31cc253
    master date: 2017-05-09 19:00:04 +0100

commit 267bf9f3ae9c0e0e7d0c103a8b826ce8f59bd0b1
Author: Mohit Gambhir <mohit.gambhir@xxxxxxxxxx>
Date:   Fri Jun 9 13:56:07 2017 +0200

    x86/vpmu_intel: fix hypervisor crash by masking PC bit in MSR_P6_EVNTSEL
    
    Setting Pin Control (PC) bit (19) in MSR_P6_EVNTSEL results in a General
    Protection Fault and thus results in a hypervisor crash. This behavior has
    been observed on two generations of Intel processors namely, Haswell and
    Broadwell. Other Intel processor generations were not tested. However, it
    does seem to be a possible erratum that hasn't yet been confirmed by Intel.
    
    To fix the problem, this patch masks the PC bit and returns an error if
    any guest tries to write to it, on any Intel processor. Besides the
    fact that setting this bit crashes the hypervisor on Haswell and
    Broadwell, the PC flag bit toggles a hardware pin on the physical CPU
    every time the programmed event occurs, and the hardware's response to
    that toggle is undefined in the SDM, which makes the bit unsafe for
    guests to use and hence it should be masked on all machines.
    
    Signed-off-by: Mohit Gambhir <mohit.gambhir@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Kevin Tian <kevin.tian@xxxxxxxxx>
    master commit: 8bf68dca65e2d61f4dfc6715cca51ad3dd5aadf1
    master date: 2017-05-08 13:37:17 +0200

commit 6fe723ef8c49dee3aea424d966f102429be745e4
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Fri Jun 9 13:55:12 2017 +0200

    hvm: fix hypervisor crash in hvm_save_one()
    
    hvm_save_cpu_ctxt() returns success without writing any data into
    hvm_domain_context_t when all VCPUs are offline. This can then crash
    the hypervisor (with FATAL PAGE FAULT) in hvm_save_one() via the
    "off < (ctxt.cur - sizeof(*desc))" for() test, where ctxt.cur remains 0,
    causing an underflow which leads the hypervisor to go off the end of the
    ctxt buffer.
    
    This has been broken since Xen 4.4 (c/s e019c606f59).
    It has happened in practice with an HVM Linux VM (Debian 8) queried around
    shutdown:
    
    (XEN) hvm.c:1595:d3v0 All CPUs offline -- powering off.
    (XEN) ----[ Xen-4.9-rc  x86_64  debug=y   Not tainted ]----
    (XEN) CPU:    5
    (XEN) RIP:    e008:[<ffff82d0802496d2>] hvm_save_one+0x145/0x1fd
    (XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor (d0v2)
    (XEN) rax: ffff830492cbb445   rbx: 0000000000000000   rcx: ffff83039343b400
    (XEN) rdx: 00000000ff88004d   rsi: fffffffffffffff8   rdi: 0000000000000000
    (XEN) rbp: ffff8304103e7c88   rsp: ffff8304103e7c48   r8:  0000000000000001
    (XEN) r9:  deadbeefdeadf00d   r10: 0000000000000000   r11: 0000000000000282
    (XEN) r12: 00007f43a3b14004   r13: 00000000fffffffe   r14: 0000000000000000
    (XEN) r15: ffff830400c41000   cr0: 0000000080050033   cr4: 00000000001526e0
    (XEN) cr3: 0000000402e13000   cr2: ffff830492cbb447
    (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
    (XEN) Xen code around <ffff82d0802496d2> (hvm_save_one+0x145/0x1fd):
    (XEN)  00 00 48 01 c8 83 c2 08 <66> 39 58 02 75 64 eb 08 48 89 c8 ba 08 00 00 00
    (XEN) Xen stack trace from rsp=ffff8304103e7c48:
    (XEN)    0000041000000000 ffff83039343b400 ffff8304103e7c70 ffff8304103e7da8
    (XEN)    ffff830400c41000 00007f43a3b13004 ffff8304103b7000 ffffffffffffffea
    (XEN)    ffff8304103e7d48 ffff82d0802683d4 ffff8300d19fd000 ffff82d0802320d8
    (XEN)    ffff830400c41000 0000000000000000 ffff8304103e7cd8 ffff82d08026ff3d
    (XEN)    0000000000000000 ffff8300d19fd000 ffff8304103e7cf8 ffff82d080232142
    (XEN)    0000000000000000 ffff8300d19fd000 ffff8304103e7d28 ffff82d080207051
    (XEN)    ffff8304103e7d18 ffff830400c41000 0000000000000202 ffff830400c41000
    (XEN)    0000000000000000 00007f43a3b13004 0000000000000000 deadbeefdeadf00d
    (XEN)    ffff8304103e7e68 ffff82d080206c47 0700000000000000 ffff830410375bd0
    (XEN)    0000000000000296 ffff830410375c78 ffff830410375c80 0000000000000003
    (XEN)    ffff8304103e7e68 ffff8304103b67c0 ffff8304103b7000 ffff8304103b67c0
    (XEN)    0000000d00000037 0000000000000003 0000000000000002 00007f43a3b14004
    (XEN)    00007ffd5d925590 0000000000000000 0000000100000000 0000000000000000
    (XEN)    00000000ea8f8000 0000000000000000 00007ffd00000000 0000000000000000
    (XEN)    00007f43a276f557 0000000000000000 00000000ea8f8000 0000000000000000
    (XEN)    00007ffd5d9255e0 00007f43a23280b2 00007ffd5d926058 ffff8304103e7f18
    (XEN)    ffff8300d19fe000 0000000000000024 ffff82d0802053e5 deadbeefdeadf00d
    (XEN)    ffff8304103e7f08 ffff82d080351565 010000003fffffff 00007f43a3b13004
    (XEN)    deadbeefdeadf00d deadbeefdeadf00d deadbeefdeadf00d deadbeefdeadf00d
    (XEN)    ffff8800781425c0 ffff88007ce94300 ffff8304103e7ed8 ffff82d0802719ec
    (XEN) Xen call trace:
    (XEN)    [<ffff82d0802496d2>] hvm_save_one+0x145/0x1fd
    (XEN)    [<ffff82d0802683d4>] arch_do_domctl+0xa7a/0x259f
    (XEN)    [<ffff82d080206c47>] do_domctl+0x1862/0x1b7b
    (XEN)    [<ffff82d080351565>] pv_hypercall+0x1ef/0x42c
    (XEN)    [<ffff82d080355106>] entry.o#test_all_events+0/0x30
    (XEN)
    (XEN) Pagetable walk from ffff830492cbb447:
    (XEN)  L4[0x106] = 00000000dbc36063 ffffffffffffffff
    (XEN)  L3[0x012] = 0000000000000000 ffffffffffffffff
    (XEN)
    (XEN) ****************************************
    (XEN) Panic on CPU 5:
    (XEN) FATAL PAGE FAULT
    (XEN) [error_code=0000]
    (XEN) Faulting linear address: ffff830492cbb447
    (XEN) ****************************************
    
    At the same time pave the way for having zero-length records.
    
    Inspired by an earlier patch from Andrew and Razvan.
    
    Reported-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
    Diagnosed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Tim Deegan <tim@xxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Tested-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
    master commit: ed719d7ca6e8df6384a2ecbe9a78977e32586478
    master date: 2017-05-04 15:05:26 +0200

commit d48df033095afadd82435e591ce66433bc6d3be2
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Fri Jun 9 13:54:27 2017 +0200

    x86/32on64: properly honor add-to-physmap-batch's size
    
    Commit 407a3c00ff ("compat/memory: fix build with old gcc") "fixed" a
    build issue by switching to the use of uninitialized data. Due to
    - the bounding of the uninitialized data item
    - the accessed area being outside of Xen space
    - arguments being properly verified by the native hypercall function
    this is not a security issue.
    
    Reported-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 144aec4140515c53bb1676df71a469f3e285c557
    master date: 2017-04-26 09:48:45 +0200
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
