
[Xen-devel] [xen-4.2-testing test] 13735: trouble: preparing/queued/running



flight 13735 xen-4.2-testing running [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13735/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-sedf-pin    <none executed>              queued
 test-amd64-i386-xl-credit2      <none executed>              queued
 test-amd64-i386-pv              <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-amd64-xl-pcipt-intel    <none executed>              queued
 test-amd64-amd64-xl-sedf        <none executed>              queued
 test-amd64-i386-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-xl-multivcpu    <none executed>              queued
 test-amd64-amd64-xl             <none executed>              queued
 test-amd64-amd64-pv             <none executed>              queued
 test-amd64-i386-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-i386-i386-xl               <none executed>              queued
 test-i386-i386-pair             <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-i386-i386-xl-qemuu-winxpsp3    <none executed>              queued
 test-amd64-amd64-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-win7-amd64    <none executed>              queued
 test-i386-i386-xl-win           <none executed>              queued
 test-amd64-amd64-pair           <none executed>              queued
 test-amd64-i386-xend-winxpsp3    <none executed>              queued
 test-i386-i386-win              <none executed>              queued
 test-amd64-i386-xl-win7-amd64    <none executed>              queued
 test-i386-i386-xl-winxpsp3      <none executed>              queued
 test-amd64-i386-win-vcpus1      <none executed>              queued
 test-i386-i386-pv               <none executed>              queued
 test-amd64-i386-win             <none executed>              queued
 test-amd64-i386-xl-winxpsp3-vcpus1    <none executed>              queued
 build-amd64                   4 xen-build                running [st=running!]
 build-i386                    1 hosts-allocate           running [st=running!]
 test-amd64-i386-xl-win-vcpus1    <none executed>              queued
 build-i386-pvops              1 hosts-allocate           running [st=running!]
 build-amd64-pvops             4 kernel-build             running [st=running!]
 build-i386-oldkern            1 hosts-allocate           running [st=running!]
 test-amd64-amd64-xl-qemuu-winxpsp3    <none executed>              queued
 build-amd64-oldkern           4 xen-build                running [st=running!]
 test-amd64-amd64-xl-winxpsp3    <none executed>              queued
 test-amd64-amd64-win            <none executed>              queued
 test-amd64-amd64-xl-win         <none executed>              queued

version targeted for testing:
 xen                  9cec8d14a1ea
baseline version:
 xen                  7f993b289dc4

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Ian Campbell <ian.campbell@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Keir Fraser <keir@xxxxxxx>
  Olaf Hering <olaf@xxxxxxxxx>
  Pasi Kärkkäinen <pasik@xxxxxx>
------------------------------------------------------------

jobs:
 build-amd64                                                  running 
 build-i386                                                   preparing
 build-amd64-oldkern                                          running 
 build-i386-oldkern                                           preparing
 build-amd64-pvops                                            running 
 build-i386-pvops                                             preparing
 test-amd64-amd64-xl                                          queued  
 test-amd64-i386-xl                                           queued  
 test-i386-i386-xl                                            queued  
 test-amd64-i386-rhel6hvm-amd                                 queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         queued  
 test-amd64-amd64-xl-win7-amd64                               queued  
 test-amd64-i386-xl-win7-amd64                                queued  
 test-amd64-i386-xl-credit2                                   queued  
 test-amd64-amd64-xl-pcipt-intel                              queued  
 test-amd64-i386-rhel6hvm-intel                               queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-i386-xl-multivcpu                                 queued  
 test-amd64-amd64-pair                                        queued  
 test-amd64-i386-pair                                         queued  
 test-i386-i386-pair                                          queued  
 test-amd64-amd64-xl-sedf-pin                                 queued  
 test-amd64-amd64-pv                                          queued  
 test-amd64-i386-pv                                           queued  
 test-i386-i386-pv                                            queued  
 test-amd64-amd64-xl-sedf                                     queued  
 test-amd64-i386-win-vcpus1                                   queued  
 test-amd64-i386-xl-win-vcpus1                                queued  
 test-amd64-i386-xl-winxpsp3-vcpus1                           queued  
 test-amd64-amd64-win                                         queued  
 test-amd64-i386-win                                          queued  
 test-i386-i386-win                                           queued  
 test-amd64-amd64-xl-win                                      queued  
 test-i386-i386-xl-win                                        queued  
 test-amd64-amd64-xl-qemuu-winxpsp3                           queued  
 test-i386-i386-xl-qemuu-winxpsp3                             queued  
 test-amd64-i386-xend-winxpsp3                                queued  
 test-amd64-amd64-xl-winxpsp3                                 queued  
 test-i386-i386-xl-winxpsp3                                   queued  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25837:9cec8d14a1ea
tag:         tip
user:        Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
date:        Wed Sep 12 19:33:18 2012 +0100
    
    x86/passthrough: Fix corruption caused by race conditions between
    device allocation and deallocation to a domain.
    
    A toolstack, when dealing with a domain using PCIPassthrough, could
    reasonably be expected to issue DOMCTL_deassign_device hypercalls to
    remove all passed through devices before issuing a
    DOMCTL_destroydomain hypercall to kill the domain.  In the case where
    a toolstack is perhaps less sensible in this regard, the hypervisor
    should not fall over.
    
    In domain_kill(), pci_release_devices() searches the alldevs_list list
    looking for PCI devices still assigned to the domain.  If the
    toolstack has correctly deassigned all devices before killing the
    domain, this loop does nothing.
    
    However, if there are still devices attached to the domain, the loop
    will call pci_cleanup_msi() without unbinding the pirq from the
    domain.  This eventually calls destroy_irq() which xfree()'s the
    action.
    
    However, as the irq_desc->action pointer is abused in an unsafe
    manner, without unbinding first (which at least cleans up correctly),
    the action is actually an irq_guest_action_t* rather than an
    irqaction*, meaning that the cpu_eoi_map is leaked, and eoi_timer is
    free()'d while still being on a pcpu's inactive_timer list.  As a
    result, when this free()'d memory gets reused, the inactive_timer list
    becomes corrupt, and list_*** operations will corrupt hypervisor
    memory.
    
    If the above were not bad enough, the loop in pci_release_devices()
    still leaves references to the irq it destroyed in
    domain->arch.pirq_irq and irq_pirq, meaning that a later loop,
    free_domain_pirqs(), which happens as a result of
    complete_domain_destroy(), will unbind and destroy all irqs which were
    still bound to the domain, resulting in a double destroy of any irq
    which was still bound to the domain at the point at which the
    DOMCTL_destroydomain hypercall happened.
    
    Because of the allocation of irqs from find_unassigned_irq(), the
    lowest free irq number is going to be handed back from create_irq().
    
    There is a further race condition between the original (incorrect)
    call to destroy_irq() from pci_release_devices(), and the later call
    to free_domain_pirqs() (which happens in a softirq context at some
    point after the domain has officially died) during which the same irq
    number (which is still referenced in a stale way in
    domain->arch.pirq_irq and irq_pirq) has been allocated to a new domain
    via a PHYSDEVOP_map_pirq hypercall (say, in the case of rebooting a
    domain).
    
    In this case, the cleanup for the dead domain will free the recently
    bound irq under the feet of the new domain.  Furthermore, after the
    irq has been incorrectly destroyed, the same domain with another
    PHYSDEVOP_map_pirq hypercall can be allocated the same irq number as
    before, leading to an error along the lines of:
    
    ../physdev.c:188: dom54: -1:-1 already mapped to 74
    
    In this case, the pirq_irq and irq_pirq mappings get updated to the
    new PCI device from the latter PHYSDEVOP_map_pirq hypercall, and the
    IOMMU interrupt remapping registers get updated, leading to IOMMU
    Primary Pending Fault due to source-id verification failure for
    incoming interrupts from the passed through device.
    
    
    The easy fix is to simply deassign the device in pci_release_devices()
    and leave all the real cleanup to the free_domain_pirqs() which
    correctly unbinds and destroys the irq without leaving stale
    references around.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Committed-by: Keir Fraser <keir@xxxxxxx>
    xen-unstable changeset:   25883:4fdaebea82d7
    xen-unstable date:        Wed Sep 12 19:31:16 2012 +0100
    
    
changeset:   25836:7550c9b55af2
user:        Pasi Kärkkäinen <pasik@xxxxxx>
date:        Wed Sep 12 19:03:50 2012 +0100
    
    xl.cfg: gfx_passthru documentation improvements
    
    gfx_passthru: Document that gfx_passthru makes the passed-through
    GPU the primary GPU in the guest, and add other general information
    about gfx_passthru.
    
    Signed-off-by: Pasi Kärkkäinen <pasik@xxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Committed-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    xen-unstable changeset:   25839:2dfea3dff550
    xen-unstable date:        Mon Sep 10 11:13:54 2012 +0100
    
    
changeset:   25835:7f993b289dc4
user:        Olaf Hering <olaf@xxxxxxxxx>
date:        Wed Sep 12 14:48:04 2012 +0100
    
    unmodified_drivers: handle IRQF_SAMPLE_RANDOM
    
    The flag IRQF_SAMPLE_RANDOM was removed in 3.6-rc1.  Add it only if
    it is defined.  An additional call to add_interrupt_randomness is
    apparently not needed because it is now called unconditionally in
    handle_irq_event_percpu().
    
    Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
    Committed-by: Jan Beulich <jbeulich@xxxxxxxx>
    xen-unstable changeset:   25837:87cb4b6f53d3
    xen-unstable date:        Mon Sep 10 10:54:13 2012 +0200
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

