
[xen-4.5-testing test] 62022: regressions - FAIL



flight 62022 xen-4.5-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/62022/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-vhd        9 debian-di-install         fail REGR. vs. 61513

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-vhd 14 guest-saverestore.2 fail in 61844 pass in 62022
 test-armhf-armhf-xl-arndale   9 debian-install     fail in 61844 pass in 62022
 test-amd64-i386-xl-qemuu-winxpsp3 13 guest-localmigrate fail in 61844 pass in 62022
 test-amd64-i386-rumpuserxen-i386 10 guest-start             fail pass in 61844
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-localmigrate  fail pass in 61844
 test-amd64-amd64-xl-qemuu-winxpsp3 15 guest-localmigrate.2  fail pass in 61844

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-libvirt-vhd  9 debian-di-install         fail REGR. vs. 61513
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail blocked in 61513
 test-amd64-amd64-libvirt-pair 21 guest-migrate/src_host/dst_host fail in 61844 like 61513
 test-amd64-i386-libvirt-pair 21 guest-migrate/src_host/dst_host fail in 61844 like 61513
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop    fail in 61844 like 61513
 test-amd64-i386-libvirt-raw   9 debian-di-install            fail   like 61513
 test-amd64-amd64-libvirt-raw  9 debian-di-install            fail   like 61513
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail like 61513
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail like 61513

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvh-amd  11 guest-start                  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start                  fail  never pass
 test-armhf-armhf-xl-vhd       9 debian-di-install            fail   never pass
 test-amd64-amd64-xl-qcow2     9 debian-di-install            fail   never pass
 test-armhf-armhf-xl-raw       9 debian-di-install            fail   never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install            fail never pass
 build-amd64-prev              5 xen-build                    fail   never pass
 build-i386-prev               5 xen-build                    fail   never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install            fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-amd64-i386-libvirt-qcow2 11 migrate-support-check        fail  never pass
 test-amd64-amd64-libvirt-qcow2 11 migrate-support-check        fail never pass
 test-amd64-i386-libvirt-vhd  11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-vhd  9 debian-di-install            fail   never pass
 test-armhf-armhf-xl-qcow2     9 debian-di-install            fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ffb4e6387f489b6b5ce287f51db43cb37ebae064
baseline version:
 xen                  ef89dc8c00087c8c1819e60bdad5527db70425e1

Last test of basis    61513  2015-09-07 11:42:18 Z    9 days
Failing since         61751  2015-09-10 14:07:54 Z    6 days    3 attempts
Testing same since    61844  2015-09-12 15:58:20 Z    4 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Anshul Makkar <anshul.makkar@xxxxxxxxxx>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@xxxxxxx>
  Ian Campbell <ian.campbell@xxxxxxxxxx>
  Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Jim Fehlig <jfehlig@xxxxxxxx>
  Julien Grall <julien.grall@xxxxxxxxxx>
  Roger Pau Monné <roger.pau@xxxxxxxxxx>
  Wei Liu <wei.liu2@xxxxxxxxxx>

jobs:
 build-amd64                                                  pass
 build-armhf                                                  pass
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-armhf-libvirt                                          pass
 build-i386-libvirt                                           pass
 build-amd64-prev                                             fail
 build-i386-prev                                              fail
 build-amd64-pvops                                            pass
 build-armhf-pvops                                            pass
 build-i386-pvops                                             pass
 build-amd64-rumpuserxen                                      pass
 build-i386-rumpuserxen                                       pass
 test-amd64-amd64-xl                                          pass
 test-armhf-armhf-xl                                          pass
 test-amd64-i386-xl                                           pass
 test-amd64-amd64-xl-pvh-amd                                  fail
 test-amd64-i386-qemut-rhel6hvm-amd                           pass
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass
 test-amd64-i386-freebsd10-amd64                              pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass
 test-amd64-amd64-rumpuserxen-amd64                           pass
 test-amd64-amd64-xl-qemut-win7-amd64                         fail
 test-amd64-i386-xl-qemut-win7-amd64                          fail
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail
 test-amd64-i386-xl-qemuu-win7-amd64                          fail
 test-armhf-armhf-xl-arndale                                  pass
 test-amd64-amd64-xl-credit2                                  pass
 test-armhf-armhf-xl-credit2                                  pass
 test-armhf-armhf-xl-cubietruck                               pass
 test-amd64-i386-freebsd10-i386                               pass
 test-amd64-i386-rumpuserxen-i386                             fail
 test-amd64-amd64-xl-pvh-intel                                fail
 test-amd64-i386-qemut-rhel6hvm-intel                         pass
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass
 test-amd64-amd64-libvirt                                     pass
 test-armhf-armhf-libvirt                                     fail
 test-amd64-i386-libvirt                                      pass
 test-amd64-amd64-migrupgrade                                 blocked
 test-amd64-i386-migrupgrade                                  blocked
 test-amd64-amd64-xl-multivcpu                                pass
 test-armhf-armhf-xl-multivcpu                                pass
 test-amd64-amd64-pair                                        pass
 test-amd64-i386-pair                                         pass
 test-amd64-amd64-libvirt-pair                                pass
 test-amd64-i386-libvirt-pair                                 pass
 test-amd64-amd64-amd64-pvgrub                                pass
 test-amd64-amd64-i386-pvgrub                                 pass
 test-amd64-amd64-pygrub                                      pass
 test-amd64-amd64-libvirt-qcow2                               pass
 test-armhf-armhf-libvirt-qcow2                               fail
 test-amd64-i386-libvirt-qcow2                                pass
 test-amd64-amd64-xl-qcow2                                    fail
 test-armhf-armhf-xl-qcow2                                    fail
 test-amd64-i386-xl-qcow2                                     pass
 test-amd64-amd64-libvirt-raw                                 fail
 test-armhf-armhf-libvirt-raw                                 fail
 test-amd64-i386-libvirt-raw                                  fail
 test-amd64-amd64-xl-raw                                      pass
 test-armhf-armhf-xl-raw                                      fail
 test-amd64-i386-xl-raw                                       pass
 test-amd64-amd64-xl-rtds                                     pass
 test-armhf-armhf-xl-rtds                                     fail
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass
 test-amd64-amd64-libvirt-vhd                                 fail
 test-armhf-armhf-libvirt-vhd                                 fail
 test-amd64-i386-libvirt-vhd                                  pass
 test-amd64-amd64-xl-vhd                                      pass
 test-armhf-armhf-xl-vhd                                      fail
 test-amd64-i386-xl-vhd                                       fail
 test-amd64-amd64-xl-qemut-winxpsp3                           pass
 test-amd64-i386-xl-qemut-winxpsp3                            pass
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail
 test-amd64-i386-xl-qemuu-winxpsp3                            pass


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ffb4e6387f489b6b5ce287f51db43cb37ebae064
Author: Wei Liu <wei.liu2@xxxxxxxxxx>
Date:   Tue Jul 14 17:41:10 2015 +0100

    xl: correct handling of extra_config in main_cpupoolcreate

    Don't dereference extra_config if it's NULL, and don't leak extra_config
    at the end.

    Also fixed a typo in the error string while I was there.

    Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    (cherry picked from commit 705c9e12426cba82804cb578fc70785281655d94)
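
    For illustration, a minimal standalone C sketch of the two fixes the
    message describes; create_pool and its arguments are hypothetical names,
    not the actual xl code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Hypothetical stand-in for xl's pool creation path: the optional
         * extra_config string may be NULL, must not be dereferenced blindly,
         * and must be freed on every exit path. */
        static int create_pool(const char *name, char *extra_config)
        {
            int rc = 0;

            if (extra_config && strlen(extra_config))
                printf("%s: applying extra config: %s\n", name, extra_config);

            /* ... build and submit the actual pool configuration here ... */

            free(extra_config);    /* free(NULL) is a no-op, so this is safe */
            return rc;
        }

        int main(void)
        {
            char *extra = malloc(32);

            if (extra)
                snprintf(extra, 32, "sched=credit2");

            /* Both calls must work: with an extra config string and without. */
            return create_pool("pool0", extra) || create_pool("pool1", NULL);
        }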

commit 2049db36021e305510adc09c8388208c784c7522
Author: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Date:   Fri Sep 11 11:51:59 2015 +0100

    QEMU_TAG update

commit 0b6e02bd3b589c50d3d8111e91b9510d226e7e40
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Sep 10 15:36:12 2015 +0200

    x86/NUMA: make init_node_heap() respect Xen heap limit

    On NUMA systems, where we try to use node local memory for the basic
    control structures of the buddy allocator, this special case needs to
    take into consideration a possible address width limit placed on the
    Xen heap. In turn this (but also other, more abstract considerations)
    requires that xenheap_max_mfn() not be called more than once (at most
    we might permit it to be called a second time with a larger value than
    was passed the first time), and be called only before calling
    end_boot_allocator().

    While inspecting all the involved code, a couple of off-by-one issues
    were found (and are being corrected here at once):
    - arch_init_memory() cleared one too many page table slots
    - the highmem_start based invocation of xenheap_max_mfn() passed too
      big a value
    - xenheap_max_mfn() calculated the wrong bit count in edge cases

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

    xen/arm64: do not (incorrectly) limit size of xenheap

    The commit 88e3ed61642bb393458acc7a9bd2f96edc337190 "x86/NUMA: make
    init_node_heap() respect Xen heap limit" breaks boot on the arm64 board
    X-Gene.

    The xenheap bits variable records the last RAM MFN that is always mapped
    in Xen virtual memory. If the value is 0, all of the memory is always
    mapped in Xen virtual memory.

    On X-Gene the RAM bank resides above 128GB and the last xenheap MFN is
    0x4400000. With the new way of calculating the number of bits, xenheap_bits
    will be 38. This hides all of the RAM and makes it impossible to allocate
    xenheap memory.

    Given that aarch64 always has all of its memory mapped in Xen virtual
    memory, it is not necessary to call xenheap_max_mfn(), which sets the
    number of bits.

    Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    master commit: 88e3ed61642bb393458acc7a9bd2f96edc337190
    master date: 2015-09-01 14:02:57 +0200
    master commit: 0a7167d9b20cdc48e6ea320fbbb920b3267c9757
    master date: 2015-09-04 14:58:07 +0100
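
    As a hedged sketch of the call-ordering rule spelled out in the first
    half of this commit message (assert-based, with illustrative names rather
    than Xen's actual implementation): the limit may be set at most twice,
    only with a growing value, and never after the boot allocator has been
    torn down.

        #include <assert.h>
        #include <stdbool.h>

        /* Illustrative state, not Xen's: the recorded xenheap limit and a
         * flag saying whether end_boot_allocator() has already run. */
        static unsigned long xenheap_limit_mfn;
        static bool boot_allocator_ended;
        static unsigned int calls;

        static void set_xenheap_max_mfn(unsigned long mfn)
        {
            calls++;
            assert(calls <= 2);                 /* at most one repeat call   */
            assert(!boot_allocator_ended);      /* only before end of boot   */
            assert(xenheap_limit_mfn == 0 ||
                   mfn > xenheap_limit_mfn);    /* repeat only with a larger value */
            xenheap_limit_mfn = mfn;
        }

        static void end_boot_allocator_marker(void)
        {
            boot_allocator_ended = true;
        }

        int main(void)
        {
            set_xenheap_max_mfn(0x100000);      /* first (or only) call */
            set_xenheap_max_mfn(0x4400000);     /* acceptable: larger value */
            end_boot_allocator_marker();
            /* A further call here would trip the asserts above. */
            return 0;
        }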

commit ef372ac6ec1619491bb194c841d1b7405554a1c9
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Sep 10 15:35:11 2015 +0200

    x86/NUMA: don't account hotplug regions

    ... except in cases where they really matter: node_memblk_range[] now
    is the only place all regions get stored. nodes[] and NODE_DATA() track
    present memory only. This improves the reporting when nodes have
    disjoint "normal" and hotplug regions, with the hotplug region sitting
    above the highest populated page. In such cases a node's spanned-pages
    value (visible in both XEN_SYSCTL_numainfo and 'u' debug key output)
    covered all the way up to the top of populated memory, giving quite a
    different picture from what an otherwise identically configured system
    without any hotplug regions would report. Note, however, that
    the actual hotplug case (as well as cases of nodes with multiple
    disjoint present regions) is still not being handled such that the
    reported values would represent how much memory a node really has (but
    that can be considered intentional).

    Reported-by: Jim Fehlig <jfehlig@xxxxxxxx>

    This at once makes nodes_cover_memory() no longer consider E820_RAM
    regions covered by SRAT hotplug regions.

    Also reject self-overlaps with mismatching hotplug flags.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Tested-by: Jim Fehlig <jfehlig@xxxxxxxx>
    master commit: c011f470e6e79208f5baa071b4d072b78c88e2ba
    master date: 2015-08-31 13:52:24 +0200

commit 8bdfe147851d7b73f41613966ba6fc8659d6a5b9
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Sep 10 15:34:26 2015 +0200

    x86/NUMA: fix setup_node()

    The function referenced an __initdata object (nodes_found). Since having
    it be a node mask was more complicated than needed, the variable gets
    replaced by a simple counter. At the same time, check that the count of
    nodes doesn't go beyond MAX_NUMNODES.

    Also consolidate three printk()s related to the function's use into just
    one.

    Finally (quite the opposite of the above issue) __init-annotate
    nodes_cover_memory().

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 8f945d36d9bddd5b589ba23c7322b30d623dd084
    master date: 2015-08-31 13:51:52 +0200
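
    A standalone sketch of the counter-plus-bound idea described above;
    MAX_NUMNODES here is an arbitrary illustrative value and the function
    body is not the actual Xen code:

        #include <stdio.h>

        #define MAX_NUMNODES 64    /* illustrative limit, not Xen's build-time value */

        static int nodes_found;    /* simple counter instead of a node mask */

        /* Return a new node id for a proximity domain, or -1 once the
         * MAX_NUMNODES bound has been reached. */
        static int setup_node(unsigned int pxm)
        {
            if (nodes_found >= MAX_NUMNODES) {
                fprintf(stderr,
                        "SRAT: too many proximity domains (PXM %u, limit %d)\n",
                        pxm, MAX_NUMNODES);
                return -1;
            }
            return nodes_found++;
        }

        int main(void)
        {
            for (unsigned int pxm = 0; pxm < 70; pxm++)
                if (setup_node(pxm) < 0)
                    break;
            printf("nodes set up: %d\n", nodes_found);
            return 0;
        }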

commit 8933ed433693bff92e5f7c760bffcfff6a92614d
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Sep 10 15:33:45 2015 +0200

    IOMMU: skip domains without page tables when dumping

    Reported-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Tested-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    master commit: 5f335544cf5b716b0af51223e33373c4a7d65e8c
    master date: 2015-08-27 17:40:38 +0200

commit d46192366df9951378ddd4457311729cfd0668ca
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Sep 10 15:33:12 2015 +0200

    x86/IO-APIC: don't create pIRQ mapping from masked RTE

    While moving our XenoLinux patches to 4.2-rc I noticed bogus "already
    mapped" messages resulting from Linux (legitimately) writing RTEs with
    only the mask bit set. Clearly we shouldn't even attempt to create a
    pIRQ <-> IRQ mapping from such RTEs.

    In the course of this I also found that the respective message isn't
    really useful without also printing the pre-existing mapping. And I
    noticed that map_domain_pirq() allowed IRQ0 to get through, despite us
    never allowing a domain to control that interrupt.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 669d4b85c433674ab3b52ef707af0d3a551c941f
    master date: 2015-08-25 16:18:31 +0200
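
    The two guards described above, as a hedged standalone sketch; the RTE
    layout and function names are simplified illustrations, not Xen's
    definitions:

        #include <errno.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Simplified, illustrative RTE: only the fields the guards need. */
        struct rte {
            unsigned int vector;
            bool masked;
        };

        static int maybe_map_pirq(unsigned int irq, const struct rte *rte)
        {
            if (rte->masked)     /* only the mask bit written: nothing to map */
                return 0;
            if (irq == 0)        /* domains never control IRQ0 */
                return -EPERM;
            printf("mapping pIRQ for IRQ %u (vector %u)\n", irq, rte->vector);
            return 0;
        }

        int main(void)
        {
            struct rte masked = { .vector = 0,    .masked = true  };
            struct rte live   = { .vector = 0x30, .masked = false };

            maybe_map_pirq(9, &masked);   /* no-op: RTE is masked */
            maybe_map_pirq(9, &live);     /* creates the mapping  */
            return maybe_map_pirq(0, &live) == -EPERM ? 0 : 1;
        }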

commit 5b7198822921e0c8a9c2ff140e3fe52a1b974844
Author: Aravind Gopalakrishnan <aravind.gopalakrishnan@xxxxxxx>
Date:   Thu Sep 10 15:32:13 2015 +0200

    x86, amd_ucode: skip microcode updates for final levels

    Some older [Fam10h] systems require that certain applied microcode
    patch levels not be overwritten by the microcode loader; otherwise,
    system hangs are known to occur.

    The 'final_levels' of patch IDs have been obtained empirically.
    Refer to bug https://bugzilla.suse.com/show_bug.cgi?id=913996
    for details of the issue.

    The short version is that people have predominantly noticed
    system hang issues when trying to update microcode levels
    beyond the patch IDs below:
    [0x01000098, 0x0100009f, 0x010000af]

    From internal discussions, we gathered that OS/hypervisor
    cannot reliably perform microcode updates beyond these levels
    due to hardware issues. Therefore, we need to abort microcode
    update process if we hit any of these levels.

    In this patch, we check for those microcode versions and abort
    if the current core has one of those final patch levels applied
    by the BIOS.

    A Linux version of the patch has already made it into tip:
    http://marc.info/?l=linux-kernel&m=143703405627170

    Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@xxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
    master commit: 22c5675877c8209adcfdb6bceddb561320374529
    master date: 2015-08-25 16:17:13 +0200
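
    A standalone sketch of the check described above, using the three patch
    IDs quoted in the message; function and variable names are illustrative,
    not the actual loader code:

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Final patch levels quoted in the commit message: once the BIOS has
         * applied one of these, the loader must not try to go further. */
        static const uint32_t final_levels[] = {
            0x01000098,
            0x0100009f,
            0x010000af,
        };

        static bool update_allowed(uint32_t current_patch_level)
        {
            for (size_t i = 0; i < sizeof(final_levels) / sizeof(final_levels[0]); i++)
                if (current_patch_level == final_levels[i])
                    return false;   /* abort: hardware issues beyond this level */
            return true;
        }

        int main(void)
        {
            printf("0x0100009f: %s\n", update_allowed(0x0100009f) ? "update" : "skip");
            printf("0x01000085: %s\n", update_allowed(0x01000085) ? "update" : "skip");
            return 0;
        }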

commit fabd2cffef1eaf94159b941edb1dc05c8cf20597
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Thu Sep 10 15:31:30 2015 +0200

    mm: populate_physmap: validate correctly the gfn for direct mapped domain

    A direct mapped domain already has its memory allocated 1:1, so the gfn
    is used directly as the mfn to map the RAM in the guest.

    While we validate that the page associated with the first MFN belongs to
    the domain, the subsequent MFNs are not validated when the extent_order
    is > 0.

    This may result in mapping memory regions (MMIO, RAM) which don't belong
    to the domain.

    However, only DOM0 on ARM uses direct memory mapping, so this doesn't
    affect any guest (at least in the upstream version) or x86 at all.

    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    master commit: 9503ab0e9c6a41a1ee7a70c8ea9313d08ebaa8c5
    master date: 2015-08-13 14:41:09 +0200
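
    A hedged sketch of the stricter extent check the message calls for; the
    struct and helpers are hypothetical stand-ins, not Xen's
    populate_physmap() code:

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical stand-ins for the hypervisor's domain and ownership
         * check; in a direct mapped domain the gfn doubles as the mfn. */
        struct domain { unsigned long first_mfn, nr_mfns; };

        static bool mfn_owned_by(const struct domain *d, unsigned long mfn)
        {
            return mfn >= d->first_mfn && mfn < d->first_mfn + d->nr_mfns;
        }

        /* Check the whole 2^extent_order extent, not only its first page. */
        static bool extent_ok(const struct domain *d, unsigned long gfn,
                              unsigned int extent_order)
        {
            for (unsigned long i = 0; i < (1UL << extent_order); i++)
                if (!mfn_owned_by(d, gfn + i))
                    return false;   /* would map memory the domain doesn't own */
            return true;
        }

        int main(void)
        {
            struct domain dom0 = { .first_mfn = 0x80000, .nr_mfns = 0x100 };

            /* Order-4 extent (16 pages) straddling the end of dom0's RAM:
             * checking only the first page would wrongly accept it. */
            printf("extent ok: %d\n", extent_ok(&dom0, 0x800f8, 4));
            return 0;
        }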

commit 9e6379ed39a53870a749230410ed9cfdd88348cf
Author: Anshul Makkar <anshul.makkar@xxxxxxxxxx>
Date:   Thu Sep 10 15:30:36 2015 +0200

    x86/mm: Make {hap, shadow}_teardown() preemptible

    A domain with sufficient shadow allocation can cause a watchdog timeout
    during domain destruction.  Expand the existing -ERESTART logic in
    paging_teardown() to allow {hap/sh}_set_allocation() to become
    restartable during the DOMCTL_destroydomain hypercall.

    Signed-off-by: Anshul Makkar <anshul.makkar@xxxxxxxxxx>
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
    master commit: 0174da5b79752e2d5d6ca0faed89536e8f3d91c7
    master date: 2015-08-06 10:04:43 +0100
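
    A standalone sketch of the general -ERESTART pattern; the batch size,
    names, and restart loop are illustrative, not the actual
    {hap,shadow}_teardown() code:

        #include <stdio.h>

        #define ERESTART     85      /* illustrative value for the internal errno */
        #define BATCH_PAGES  1024    /* per-invocation budget, chosen arbitrarily */

        struct pool { unsigned long pages_left; };

        /* Tear the pool down a batch at a time; return -ERESTART when more
         * work remains so the caller can restart the hypercall instead of
         * running until the watchdog fires. */
        static int pool_teardown(struct pool *p)
        {
            unsigned long budget = BATCH_PAGES;

            while (p->pages_left && budget--)
                p->pages_left--;             /* stands in for freeing one page */

            return p->pages_left ? -ERESTART : 0;
        }

        int main(void)
        {
            struct pool p = { .pages_left = 5000 };
            int calls = 0, rc;

            do {
                rc = pool_teardown(&p);      /* caller restarts on -ERESTART */
                calls++;
            } while (rc == -ERESTART);

            printf("completed after %d calls\n", calls);   /* 5 calls here */
            return rc;
        }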

commit 12afed3c90a3f56cd3b6376992cfcb849aa8f3f9
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Sep 10 15:29:31 2015 +0200

    x86/gdt: Drop write-only, xalloc()'d array from set_gdt()

    It is not used, and can cause a spurious failure of the set_gdt()
    hypercall in low memory situations.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Reviewed-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
    master commit: a7bd9b1661304500cd18b7d216d616ecf053ebdb
    master date: 2015-08-05 10:32:45 +0100
(qemu changes not included)


 

