
[Xen-devel] [xen-unstable baseline-only test] 44414: regressions - trouble: blocked/broken/fail/pass

This run is configured for baseline tests only.

flight 44414 xen-unstable real [real]

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-xsm               4 capture-logs             !broken [st=!broken!]
 build-armhf-pvops             4 capture-logs             !broken [st=!broken!]
 build-armhf                   4 capture-logs             !broken [st=!broken!]
 test-amd64-amd64-amd64-pvgrub 10 guest-start              fail REGR. vs. 44407
 test-amd64-i386-xl-qemut-win7-amd64  9 windows-install    fail REGR. vs. 44407

Regressions which are regarded as allowable (not blocking):
 build-armhf-xsm               3 host-install(3)              broken like 44407
 build-armhf-pvops             3 host-install(3)              broken like 44407
 build-armhf                   3 host-install(3)              broken like 44407
 build-amd64-rumpuserxen       6 xen-build                    fail   like 44407
 build-i386-rumpuserxen        6 xen-build                    fail   like 44407
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop              fail like 44407
 test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail like 44407

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)               blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-midway    1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvh-amd  11 guest-start                  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start                  fail  never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop             fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop             fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  cd2cd109e7db3a7e689c20b8991d41115ed5bea6
baseline version:
 xen                  c79fc6c4bee28b40948838a760b4aaadf6b5cd47

Last test of basis    44407  2016-05-12 02:20:16 Z    2 days
Testing same since    44414  2016-05-14 00:52:11 Z    0 days    1 attempts

People who touched revisions under test:
  Doug Goldstein <cardoe@xxxxxxxxxx>
  George Dunlap <george.dunlap@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
  Olaf Hering <olaf@xxxxxxxxx>
  Paul Durrant <paul.durrant@xxxxxxxxxx>
  Wei Liu <wei.liu2@xxxxxxxxxx>

 build-amd64-xsm                                              pass    
 build-armhf-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            broken  
 build-i386-pvops                                             pass    
 build-amd64-rumpuserxen                                      fail    
 build-i386-rumpuserxen                                       fail    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-armhf-armhf-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-armhf-armhf-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvh-amd                                  fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumpuserxen-amd64                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumpuserxen-i386                             blocked 
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvh-intel                                fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-armhf-armhf-xl-midway                                   blocked 
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           pass    
 test-amd64-i386-xl-qemut-winxpsp3                            pass    
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass    
 test-amd64-i386-xl-qemuu-winxpsp3                            pass    

sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at

Test harness code can be found at

broken build-armhf-xsm capture-logs !broken
broken-step build-armhf-xsm host-install(3)
broken-step build-armhf-pvops host-install(3)
broken build-armhf-pvops capture-logs !broken
broken-step build-armhf host-install(3)
broken build-armhf capture-logs !broken

Push not applicable.

commit cd2cd109e7db3a7e689c20b8991d41115ed5bea6
Author: Doug Goldstein <cardoe@xxxxxxxxxx>
Date:   Thu May 12 10:29:29 2016 -0500

    xendriverdomain: use POSIX sh and not bash
    The script doesn't use any bash-isms and works fine with BusyBox's ash.
    Signed-off-by: Doug Goldstein <cardoe@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
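The portability point in this commit can be sketched as follows (a minimal illustration, not the actual xendriverdomain script; the function is made up):

```shell
#!/bin/sh
# POSIX sh lacks bash-isms such as [[ ]] and arrays, so a portable
# script sticks to [ ] and plain parameter handling. This runs the
# same under dash, BusyBox ash, and bash.
service_running() {
    # POSIX-compliant test: non-empty argument equal to "running"
    [ -n "$1" ] && [ "$1" = "running" ]
}
service_running running && echo "up"
```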

commit 556c69f4efb09dd06e6bce4cbb0455287f19d02e
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu May 12 18:02:21 2016 +0200

    x86/PoD: skip eager reclaim when possible
    Reclaiming pages is pointless when the cache can already satisfy all
    outstanding PoD entries, and doing reclaims in that case can be very
    harmful to performance when that memory gets used by the guest, but
    only to store zeroes there.
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>

commit 744fe0347d584f8b80b91ece93ef87e903c41bfa
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu May 12 14:24:39 2016 +0200

    Revert "blktap2: Use RING_COPY_REQUEST"
    This reverts commit 19f6c522a6a9599317ee1d8c4a155d1400d04c89, which
    was wrongly associated with XSA-155 and was (rightfully) never
    backported to any of the stable trees. See also
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>

commit c5ed88110cd1b72af643d7d9e255d587f2c90d3d
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Wed May 11 09:59:08 2016 -0400

    xsplice: Unmask (aka reinstall NMI handler) if we need to abort.
    If we have to abort in xsplice_spin() we end up following the
    goto abort. But unfortunately we neglected to unmask.
    This patch fixes that.
    Reported-by: Martin Pohlack <mpohlack@xxxxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>

commit 1c7fa3dc039487d18ad0c6fb6b773c831dca5e5d
Author: George Dunlap <george.dunlap@xxxxxxxxxx>
Date:   Wed May 11 12:14:45 2016 +0100

    tools/xendomains: Create lockfile on start unconditionally
    At the moment, the xendomains init script will only create a lockfile
    if, when started, it actually does something -- either tries to restore
    a previously saved domain as a result of XENDOMAINS_RESTORE, or tries
    to create a domain as a result of XENDOMAINS_AUTO.
    RedHat-based SYSV init systems try to only call "${SERVICE} shutdown"
    on services which actually have an actively running component; they
    use the existence of /var/lock/subsys/${SERVICE} to determine which
    services are running.
    This means that at the moment, on RedHat-based SYSV systems (such as
    CentOS 6), if you enable xendomains, and have XENDOMAINS_RESTORE set
    to "true", but don't happen to start a VM, then your running VMs will
    not be suspended on shutdown.
    Since the lockfile doesn't really have any other effect than to
    prevent duplicate starting, just create it unconditionally every time
    we start the xendomains script.
    The other option would have been to touch the lockfile if
    XENDOMAINS_RESTORE was true regardless of whether there were any
    domains to be restored.  But this would mean that if you started with
    the xendomains script active but XENDOMAINS_RESTORE set to "false",
    and then changed it to "true", then xendomains would still not run the
    next time you shut down.  This seems to me to violate the principle of
    least surprise.
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Olaf Hering <olaf@xxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
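The change described above amounts to something like the following (a sketch only; the lockfile path and function bodies are illustrative, not the actual xendomains source):

```shell
# Illustrative lockfile path; the real script uses the subsys directory.
LOCKFILE=${LOCKFILE:-/tmp/xendomains-demo.lock}

start() {
    # Create the lockfile unconditionally, so RedHat-style SYSV init
    # always calls "xendomains stop" at shutdown, even if this start
    # neither restored nor auto-started any domain.
    touch "$LOCKFILE"
    # ... XENDOMAINS_RESTORE / XENDOMAINS_AUTO handling would go here ...
}

stop() {
    # ... save or shut down running domains here ...
    rm -f "$LOCKFILE"
}
```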

commit 1209ba4218ae03067c4d42392229263750efe814
Author: George Dunlap <george.dunlap@xxxxxxxxxx>
Date:   Wed May 11 12:14:44 2016 +0100

    hotplug: Fix xendomains lock path for RHEL-based systems
    Commit c996572 changed the LOCKFILE path from a check between two
    hardcoded paths (/var/lock/subsys/ or /var/lock) to using the
    XEN_LOCK_DIR variable designated at configure time.  Since
    XEN_LOCK_DIR doesn't (and shouldn't) have the 'subsys' postfix, this
    effectively moves all the lock files by default to /var/lock instead.
    Unfortunately, this breaks xendomains on RedHat-based SYSV init
    systems.  RedHat-based SYSV init systems try to only call "${SERVICE}
    shutdown" on services which actually have an actively running
    component; they use the existence of /var/lock/subsys/${SERVICE}
    to determine which services are running.
    Changing XEN_LOCK_DIR to /var/lock/subsys is not suitable, as only
    system services like xendomains should create lockfiles there; other
    locks (such as the console locks) should be created in /var/lock.
    Instead, re-instate the check for the subsys/ subdirectory of the lock
    directory in the xendomains script.
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Olaf Hering <olaf@xxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
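The re-instated check can be sketched like this (the function name is made up; the real script inlines this logic with the configure-time XEN_LOCK_DIR):

```shell
pick_lockfile() {
    # $1: the configured lock directory, e.g. /var/lock
    if [ -d "$1/subsys" ]; then
        # RedHat-style SYSV init keeps service lockfiles under subsys/
        echo "$1/subsys/xendomains"
    else
        echo "$1/xendomains"
    fi
}
```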

commit 46ed6a814c2867260c0ebd9a7399466c801637be
Author: Paul Durrant <paul.durrant@xxxxxxxxxx>
Date:   Mon May 9 17:43:14 2016 +0100

    tools: configure correct trace backend for QEMU
    Newer versions of the QEMU source have replaced the 'stderr' trace
    backend with 'log'. This patch adjusts the tools Makefile to test for
    the 'log' backend and specify it if it is available.
    Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
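A hypothetical sketch of the kind of probe the tools Makefile performs (the function name and matching are illustrative, not the actual Makefile code):

```shell
pick_trace_backend() {
    # $1: the list of trace backends advertised by QEMU's configure
    case " $1 " in
        *" log "*) echo log ;;     # newer QEMU renamed 'stderr' to 'log'
        *)         echo stderr ;;  # fall back for older QEMU
    esac
}
```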

commit a6abcd8f758d968f6eb4d93ab37db4388eb9df7e
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed May 11 09:47:21 2016 +0200

    x86: correct remaining extended CPUID level checks
    We should consistently check that the upper 16 bits equal 0x8000,
    and only then check the full value to be >= the desired level.
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
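The check described above can be illustrated as follows (the real code is C inside the hypervisor; the function name here is made up):

```shell
ext_level_ok() {
    # $1: maximum extended leaf as reported by CPUID leaf 0x80000000
    # $2: required extended leaf
    # First make sure the upper 16 bits equal 0x8000 (i.e. extended
    # leaves exist at all), and only then compare the full values.
    [ $(( ($1 >> 16) & 0xffff )) -eq $(( 0x8000 )) ] && [ "$1" -ge "$2" ]
}
```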

commit a24edf49f5195fc3ec54584e42a6cdef6d248221
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed May 11 09:46:43 2016 +0200

    x86: cap address bits CPUID output
    Don't use more, or report more to guests, than we are capable of.
    At the same time:
    - correct the involved extended CPUID level checks,
    - simplify the code in hvm_cpuid() and mtrr_top_of_ram().
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>

commit 5590bd17c474b3cff4a86216b17349a3045f6158
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed May 11 09:46:02 2016 +0200

    XSA-77: widen scope again
    As discussed at the hackathon, avoid having to issue security
    advisories for issues affecting only heavily disaggregated tool stack
    setups, which no-one appears to use (or else they should step up to get
    things into shape).
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
(qemu changes not included)
