
[Xen-devel] [xen-unstable test] 8776: regressions - FAIL

flight 8776 xen-unstable real [real]

Regressions :-(

Tests which did not succeed and are blocking:
 test-amd64-i386-rhel6hvm-intel  4 xen-install              fail REGR. vs. 8769
 test-amd64-i386-xl-multivcpu  4 xen-install                fail REGR. vs. 8769
 test-amd64-i386-xl            4 xen-install                fail REGR. vs. 8769
 test-amd64-i386-xl-credit2    4 xen-install                fail REGR. vs. 8769
 test-amd64-i386-pair          6 xen-install/dst_host       fail REGR. vs. 8769
 test-amd64-i386-pair          5 xen-install/src_host       fail REGR. vs. 8769
 build-i386                    4 xen-build                  fail REGR. vs. 8769
 test-amd64-i386-pv            4 xen-install                fail REGR. vs. 8769
 build-i386-oldkern            4 xen-build                  fail REGR. vs. 8769
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10     fail REGR. vs. 8760
 test-amd64-i386-win-vcpus1    4 xen-install                fail REGR. vs. 8769
 test-amd64-i386-rhel6hvm-amd  4 xen-install                fail REGR. vs. 8769
 test-amd64-i386-win           4 xen-install                fail REGR. vs. 8769
 test-amd64-i386-xl-win-vcpus1  4 xen-install               fail REGR. vs. 8769

Tests which did not succeed, but are not blocking,
including regressions (tests previously passed) regarded as allowable:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  4a4882df5649
baseline version:
 xen                  ac9aa65050e9

People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Christoph Egger <Christoph.Egger@xxxxxxx>
  Dietmar Hahn <dietmar.hahn@xxxxxxxxxxxxxx>
  Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
  Keir Fraser <keir@xxxxxxx>
  Kevin Tian <kevin.tian@xxxxxxxxx>
  Laszlo Ersek <lersek@xxxxxxxxxx>
  Olaf Hering <olaf@xxxxxxxxx>
  Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>

 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        blocked 

sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at

Test harness code can be found at

Not pushing.

changeset:   23808:4a4882df5649
tag:         tip
user:        Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
date:        Wed Aug 31 15:23:49 2011 +0100
    xen: get_free_pirq: make sure that the returned pirq is allocated
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
changeset:   23807:2297b90a6a7b
user:        Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
date:        Wed Aug 31 15:23:34 2011 +0100
    xen: __hvm_pci_intx_assert should check for gsis remapped onto pirqs
    If the ISA irq corresponding to a particular gsi is disabled while the
    gsi is enabled, __hvm_pci_intx_assert will always inject the gsi
    through the vioapic, even if the gsi has been remapped onto a pirq.
    This patch makes sure that even in this case we inject the
    notification appropriately.
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
changeset:   23806:4226ea1785b5
user:        Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
date:        Wed Aug 31 15:23:12 2011 +0100
    xen: fix hvm_domain_use_pirq's behavior
    hvm_domain_use_pirq should return true when the guest is using a
    certain pirq, no matter if the corresponding event channel is
    currently enabled or disabled.  As an additional complication, qemu is
    going to request pirqs for passthrough devices even for Xen unaware
    HVM guests, so we need to wait for an event channel to be connected
    before considering the pirq of a passthrough device as "in use".
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
changeset:   23805:7048810180de
user:        Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
date:        Wed Aug 31 15:19:24 2011 +0100
    IRQ: manually EOI migrating line interrupts
    When migrating IO-APIC line level interrupts between PCPUs, the
    migration code rewrites the IO-APIC entry to point to the new
    CPU/Vector before EOI'ing it.
    The EOI process says that EOI'ing the Local APIC will cause a
    broadcast with the vector number, which the IO-APIC must listen for
    in order to clear the IRR and Status bits.
    In the case of migrating, the IO-APIC has already been
    reprogrammed so the EOI broadcast with the old vector fails to match
    the new vector, leaving the IO-APIC with an outstanding vector,
    preventing any more use of that line interrupt.  This causes a lockup
    especially when your root device is using PCI INTA (megaraid_sas
    driver *ehem*)
    However, the problem is mostly hidden because send_cleanup_vector()
    causes a cleanup of all moving vectors on the current PCPU in such a
    way that it does not cause the problem, and if the problem has
    occurred, the writes it makes to the IO-APIC clear the IRR and
    Status bits, which clears the lockup.
    This fix is distinctly a temporary hack, waiting on a cleanup of the
    irq code.  It checks for the edge case where we have moved the irq,
    and manually EOI's the old vector with the IO-APIC which correctly
    clears the IRR and Status bits.  Also, it protects the code which
    updates irq_cfg by disabling interrupts.
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
changeset:   23804:42d76c68b2bf
user:        Kevin Tian <kevin.tian@xxxxxxxxx>
date:        Wed Aug 31 15:18:23 2011 +0100
    x86: add irq count for IPIs
    Such a count is useful to assist decision making in the cpuidle
    governor; without this patch, only device interrupts passing through
    do_IRQ are currently counted.
    Signed-off-by: Kevin Tian <kevin.tian@xxxxxxxxx>
changeset:   23803:51983821efa4
user:        Dietmar Hahn <dietmar.hahn@xxxxxxxxxxxxxx>
date:        Wed Aug 31 15:17:45 2011 +0100
    vpmu: Add processors Westmere E7-8837 and SandyBridge i5-2500 to the vpmu 
    Signed-off-by: Dietmar Hahn <dietmar.hahn@xxxxxxxxxxxxxx>
changeset:   23802:bb9b81008733
user:        Laszlo Ersek <lersek@xxxxxxxxxx>
date:        Wed Aug 31 15:16:14 2011 +0100
    x86: Increase the default NR_CPUS to 256
    Changeset 21012:ef845a385014 bumped the default to 128 about one and
    a half years ago. Increase it now to 256, as systems with e.g. 160
    logical CPUs have become common.
    Signed-off-by: Laszlo Ersek <lersek@xxxxxxxxxx>
changeset:   23801:d54cfae72cd1
user:        Christoph Egger <Christoph.Egger@xxxxxxx>
date:        Wed Aug 31 15:15:41 2011 +0100
    nestedsvm: VMRUN doesn't use nextrip
    VMRUN does not use nextrip, so remove the pointless assignment.
    Signed-off-by: Christoph Egger <Christoph.Egger@xxxxxxx>
changeset:   23800:72edc40e2942
user:        Keir Fraser <keir@xxxxxxx>
date:        Wed Aug 31 15:14:49 2011 +0100
    x86-64: Fix off-by-one error in __addr_ok() macro
    Signed-off-by: Laszlo Ersek <lersek@xxxxxxxxxx>
    Signed-off-by: Keir Fraser <keir@xxxxxxx>
changeset:   23799:ac9aa65050e9
parent:      23798:469aa1fbd843
parent:      23797:2c687e70a343
user:        Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
date:        Tue Aug 30 11:46:58 2011 +0100
commit cd776ee9408ff127f934a707c1a339ee600bc127
Author: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Date:   Tue Jun 28 13:50:53 2011 +0100

    qemu-char.c: fix incorrect CONFIG_STUBDOM handling
    qemu-char.c:1123:7: warning: "CONFIG_STUBDOM" is not defined [-Wundef]
    Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>

Xen-devel mailing list


