
[Xen-devel] [xen-4.1-testing test] 8775: regressions - FAIL

flight 8775 xen-4.1-testing real [real]

Regressions :-(

Tests which did not succeed and are blocking:
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10     fail REGR. vs. 8771

Tests which did not succeed, but are not blocking,
including regressions (tests previously passed) regarded as allowable:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                  fail never pass
 test-amd64-i386-rhel6hvm-intel   9 guest-start.2                fail never pass
 test-amd64-i386-rhel6hvm-amd     9 guest-start.2                fail never pass
 test-amd64-i386-win-vcpus1      16 leak-check/check             fail never pass
 test-amd64-i386-xl-win-vcpus1   13 guest-stop                   fail never pass
 test-amd64-i386-win             16 leak-check/check             fail never pass
 test-amd64-amd64-win            16 leak-check/check             fail never pass
 test-amd64-amd64-xl-win         13 guest-stop                   fail never pass
 test-i386-i386-win              16 leak-check/check             fail never pass
 test-i386-i386-xl-win           13 guest-stop                   fail never pass

version targeted for testing:
 xen                  6239209bb560
baseline version:
 xen                  be4b078e2d08

People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Dietmar Hahn <dietmar.hahn@xxxxxxxxxxxxxx>
  Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
  Marek Marczykowski <marmarek@xxxxxxxxxxxx>
  Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>

 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    

sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at

Test harness code can be found at

Not pushing.

changeset:   23147:6239209bb560
tag:         tip
user:        Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
date:        Wed Aug 31 15:32:47 2011 +0100
    xen: __hvm_pci_intx_assert should check for gsis remapped onto pirqs
    If the isa irq corresponding to a particular gsi is disabled while the
    gsi is enabled, __hvm_pci_intx_assert will always inject the gsi
    through the vioapic, even if the gsi has been remapped onto a pirq.
    This patch makes sure that even in this case we inject the
    notification appropriately.
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
    xen-unstable changeset:   23807:2297b90a6a7b
    xen-unstable date:        Wed Aug 31 15:23:34 2011 +0100
changeset:   23146:50496ccde3c3
user:        Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
date:        Wed Aug 31 15:32:24 2011 +0100
    xen: fix hvm_domain_use_pirq's behavior
    hvm_domain_use_pirq should return true when the guest is using a
    certain pirq, no matter if the corresponding event channel is
    currently enabled or disabled.  As an additional complication, qemu is
    going to request pirqs for passthrough devices even for Xen unaware
    HVM guests, so we need to wait for an event channel to be connected
    before considering the pirq of a passthrough device as "in use".
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
    xen-unstable changeset:   23806:4226ea1785b5
    xen-unstable date:        Wed Aug 31 15:23:12 2011 +0100
changeset:   23145:1092a143ef9d
user:        Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
date:        Wed Aug 31 15:31:22 2011 +0100
    IRQ: manually EOI migrating line interrupts
    When migrating IO-APIC line level interrupts between PCPUs, the
    migration code rewrites the IO-APIC entry to point to the new
    CPU/Vector before EOI'ing it.
    The EOI process says that EOI'ing the Local APIC will cause a
    broadcast with the vector number, which the IO-APIC must listen for to
    clear the IRR and Status bits.
    In the case of migrating, the IO-APIC has already been
    reprogrammed so the EOI broadcast with the old vector fails to match
    the new vector, leaving the IO-APIC with an outstanding vector,
    preventing any more use of that line interrupt.  This causes a lockup
    especially when your root device is using PCI INTA (megaraid_sas
    driver *ehem*)
    However, the problem is mostly hidden because send_cleanup_vector()
    causes a cleanup of all moving vectors on the current PCPU in such a
    way which does not cause the problem, and if the problem has occurred,
    the writes it makes to the IO-APIC clear the IRR and Status bits,
    which unlocks the problem.
    This fix is distinctly a temporary hack, waiting on a cleanup of the
    irq code.  It checks for the edge case where we have moved the irq,
    and manually EOI's the old vector with the IO-APIC which correctly
    clears the IRR and Status bits.  Also, it protects the code which
    updates irq_cfg by disabling interrupts.
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    xen-unstable changeset:   23805:7048810180de
    xen-unstable date:        Wed Aug 31 15:19:24 2011 +0100
changeset:   23144:2ace86381b97
user:        Dietmar Hahn <dietmar.hahn@xxxxxxxxxxxxxx>
date:        Wed Aug 31 15:26:45 2011 +0100
    vpmu: Add processors Westmere E7-8837 and SandyBridge i5-2500 to the vpmu 
    Signed-off-by: Dietmar Hahn <dietmar.hahn@xxxxxxxxxxxxxx>
    xen-unstable changeset:   23803:51983821efa4
    xen-unstable date:        Wed Aug 31 15:17:45 2011 +0100
changeset:   23143:be4b078e2d08
user:        Marek Marczykowski <marmarek@xxxxxxxxxxxx>
date:        Sun Jun 05 16:55:21 2011 +0200
    libxl: Do not SEGV when no 'removable' disk parameter in xenstore
    Just assume the disk is not removable when there is no 'removable' parameter
    Signed-off-by: Marek Marczykowski <marmarek@xxxxxxxxxxxx>
    xen-unstable changeset: 23607:2f63562df1c4
    Backport-requested-by: Marek Marczykowski <marmarek@xxxxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
(qemu changes not included)
