[Xen-devel] [xen-unstable-smoke test] 87899: regressions - FAIL

flight 87899 xen-unstable-smoke real [real]

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   5 xen-build                 fail REGR. vs. 87376

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386  1 build-check(1)         blocked n/a
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a43f1e9b9d44eda4dd0338904ab422b4542bd031
baseline version:
 xen                  04119085f5a2a135e5161535b8821e1aa0d7db8a

Last test of basis    87376  2016-03-25 23:21:00 Z    3 days
Testing same since    87883  2016-03-29 13:02:02 Z    0 days    2 attempts

People who touched revisions under test:
  Anthony PERARD <anthony.perard@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Paul Durrant <paul.durrant@xxxxxxxxxx>
  Shannon Zhao <shannon.zhao@xxxxxxxxxx>
  Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>

 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386                     blocked 
 test-amd64-amd64-libvirt                                     blocked 

sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at

Explanation of these reports, and of osstest in general, is at

Test harness code can be found at

Not pushing.

commit a43f1e9b9d44eda4dd0338904ab422b4542bd031
Author: Shannon Zhao <shannon.zhao@xxxxxxxxxx>
Date:   Tue Mar 29 14:26:57 2016 +0200

    hvm/params: add a new delivery type for event-channel in 
    This new delivery type which is for ARM shares the same value with
    HVM_PARAM_CALLBACK_TYPE_VECTOR which is for x86.
    val[15:8] is a flag field; val[7:0] is a PPI.
    In the flag field, bit 8 indicates whether the interrupt mode is
    edge (1) or level (0), and bit 9 indicates whether the interrupt
    polarity is active-low (1) or active-high (0).
    Signed-off-by: Shannon Zhao <shannon.zhao@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>

commit b93687291574ee64b8244e15455a71d663787962
Author: Paul Durrant <paul.durrant@xxxxxxxxxx>
Date:   Tue Mar 29 14:26:33 2016 +0200

    x86/hvm/viridian: fix APIC assist page leak
    Commit a6f2cdb6 "keep APIC assist page mapped..." introduced a page
    leak because it relied on viridian_vcpu_deinit() always being called
    to release the page mapping. This does not happen in the case of a
    normal domain shutdown.
    This patch fixes the problem by introducing a new function,
    viridian_domain_deinit(), which will iterate through the vCPUs and
    release any page mappings still present.
    Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

commit 78c5f59ebd79117321a988c200048b5d94aa5df6
Author: Paul Durrant <paul.durrant@xxxxxxxxxx>
Date:   Tue Mar 29 14:26:03 2016 +0200

    x86/hvm/viridian: save APIC assist vector
    If any vcpu has a pending APIC assist when the domain is suspended
    then the vector needs to be saved. If this is not done then it's
    possible for the vector to remain pending in the vlapic ISR
    indefinitely after resume.
    This patch adds code to save the APIC assist vector value in the
    viridian vcpu save record. This means that the record is now zero-
    extended on load and, because this implies a loaded value of
    zero means nothing is pending (for backwards compatibility with
    hosts not implementing APIC assist), the rest of the viridian APIC
    assist code is adjusted to treat a zero value in this way. A
    check has therefore been added to viridian_start_apic_assist() to
    prevent the enlightenment being used for vectors < 0x10 (which
    are illegal for an APIC).
    Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

commit 966a420c010355bd7c28f8a75e31e713715f6afa
Author: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
Date:   Tue Mar 29 14:25:43 2016 +0200

    Anthony Perard to co-maintain qemu
    I nominate Anthony Perard as qemu-xen co-maintainer. He has been doing a
    lot of QEMU work over the years and in fact he is the original author of
    the Xen enablement code in upstream QEMU.
    As qemu-xen co-maintainer, he could help me manage the qemu-xen trees
    and promptly backport all the relevant commits from upstream QEMU.
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
    Acked-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>

commit 7bd9dc3adfbb014c55f0928ebb3b20950ca9c019
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Mar 29 14:24:26 2016 +0200

    x86: fix information leak on AMD CPUs
    The fix for XSA-52 was wrong, and so was the change synchronizing that
    new behavior to the FXRSTOR logic: AMD's manuals explicitly state that
    writes to the ES bit are ignored, and it instead gets calculated from
    the exception and mask bits (it gets set whenever there is an unmasked
    exception, and cleared otherwise). Hence we need to follow that model
    in our workaround.
    This is CVE-2016-3158 / CVE-2016-3159 / XSA-172.
    [xen/arch/x86/xstate.c:xrstor: CVE-2016-3158]
    [xen/arch/x86/i387.c:fpu_fxrstor: CVE-2016-3159]
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
(qemu changes not included)

Xen-devel mailing list
