
[xen-4.4-testing test] 62730: trouble: blocked/broken/fail/pass

flight 62730 xen-4.4-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/62730/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-raw       3 host-install(3)         broken REGR. vs. 62700

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-multivcpu 15 guest-start.2                fail  like 62616
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail like 62616
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-localmigrate      fail like 62665

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)               blocked n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 build-amd64-rumpuserxen       6 xen-build                    fail   never pass
 test-armhf-armhf-libvirt-vhd  9 debian-di-install            fail   never pass
 build-i386-rumpuserxen        6 xen-build                    fail   never pass
 build-amd64-prev              5 xen-build                    fail   never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install            fail never pass
 test-armhf-armhf-xl-vhd       9 debian-di-install            fail   never pass
 test-armhf-armhf-xl-qcow2     9 debian-di-install            fail   never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install            fail   never pass
 build-i386-prev               5 xen-build                    fail   never pass
 test-armhf-armhf-libvirt     11 guest-start                  fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qcow2 11 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-raw 11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-vhd  11 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-libvirt-raw  11 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 11 migrate-support-check        fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 21 leak-check/check        fail never pass

version targeted for testing:
 xen                  5c94f9630bf735f19df51c61817cfc6a3aebc994
baseline version:
 xen                  4d99a76cfeba6d23504121a51e7750f230128d85

Last test of basis    62700  2015-10-06 14:07:25 Z    3 days
Testing same since    62730  2015-10-08 11:12:57 Z    1 day     1 attempt

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
  Dario Faggioli <dario.faggioli@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
  Kouya Shimura <kouya@xxxxxxxxxxxxxx>
  Quan Xu <quan.xu@xxxxxxxxx>
  Yang Zhang <yang.z.zhang@xxxxxxxxx>

jobs:
 build-amd64-xend                                             pass
 build-i386-xend                                              pass
 build-amd64                                                  pass
 build-armhf                                                  pass
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-armhf-libvirt                                          pass
 build-i386-libvirt                                           pass
 build-amd64-prev                                             fail
 build-i386-prev                                              fail
 build-amd64-pvops                                            pass
 build-armhf-pvops                                            pass
 build-i386-pvops                                             pass
 build-amd64-rumpuserxen                                      fail
 build-i386-rumpuserxen                                       fail
 test-amd64-amd64-xl                                          pass
 test-armhf-armhf-xl                                          pass
 test-amd64-i386-xl                                           pass
 test-amd64-i386-qemut-rhel6hvm-amd                           pass
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass
 test-amd64-i386-freebsd10-amd64                              pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass
 test-amd64-amd64-rumpuserxen-amd64                           blocked
 test-amd64-amd64-xl-qemut-win7-amd64                         fail
 test-amd64-i386-xl-qemut-win7-amd64                          fail
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail
 test-amd64-i386-xl-qemuu-win7-amd64                          fail
 test-armhf-armhf-xl-arndale                                  pass
 test-amd64-amd64-xl-credit2                                  pass
 test-armhf-armhf-xl-credit2                                  pass
 test-armhf-armhf-xl-cubietruck                               pass
 test-amd64-i386-freebsd10-i386                               pass
 test-amd64-i386-rumpuserxen-i386                             blocked
 test-amd64-i386-qemut-rhel6hvm-intel                         pass
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass
 test-amd64-amd64-libvirt                                     pass
 test-armhf-armhf-libvirt                                     fail
 test-amd64-i386-libvirt                                      pass
 test-amd64-amd64-migrupgrade                                 blocked
 test-amd64-i386-migrupgrade                                  blocked
 test-amd64-amd64-xl-multivcpu                                pass
 test-armhf-armhf-xl-multivcpu                                fail
 test-amd64-amd64-pair                                        pass
 test-amd64-i386-pair                                         pass
 test-amd64-amd64-libvirt-pair                                pass
 test-amd64-i386-libvirt-pair                                 pass
 test-amd64-amd64-pv                                          pass
 test-amd64-i386-pv                                           pass
 test-amd64-amd64-amd64-pvgrub                                pass
 test-amd64-amd64-i386-pvgrub                                 pass
 test-amd64-amd64-pygrub                                      pass
 test-amd64-amd64-libvirt-qcow2                               pass
 test-armhf-armhf-libvirt-qcow2                               fail
 test-amd64-i386-libvirt-qcow2                                pass
 test-amd64-amd64-xl-qcow2                                    pass
 test-armhf-armhf-xl-qcow2                                    fail
 test-amd64-i386-xl-qcow2                                     pass
 test-amd64-amd64-libvirt-raw                                 pass
 test-armhf-armhf-libvirt-raw                                 fail
 test-amd64-i386-libvirt-raw                                  pass
 test-amd64-amd64-xl-raw                                      pass
 test-armhf-armhf-xl-raw                                      broken
 test-amd64-i386-xl-raw                                       pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass
 test-amd64-amd64-libvirt-vhd                                 pass
 test-armhf-armhf-libvirt-vhd                                 fail
 test-amd64-i386-libvirt-vhd                                  pass
 test-amd64-amd64-xl-vhd                                      pass
 test-armhf-armhf-xl-vhd                                      fail
 test-amd64-i386-xl-vhd                                       pass
 test-amd64-i386-xend-qemut-winxpsp3                          fail
 test-amd64-amd64-xl-qemut-winxpsp3                           pass
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-step test-armhf-armhf-xl-raw host-install(3)

Not pushing.

------------------------------------------------------------
commit 5c94f9630bf735f19df51c61817cfc6a3aebc994
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 8 12:46:39 2015 +0200

    x86/p2m-pt: correct condition of IOMMU mapping updates

    Whether the MFN changes depends solely on the old entry, not on the
    new entry being valid.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: 660fd65d5578a95ec5eac522128bba23325179eb
    master date: 2015-10-02 13:40:36 +0200
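
A minimal sketch of the corrected test, with simplified stand-in types
rather than Xen's actual p2m code:

    #include <stdbool.h>
    #include <stdint.h>

    struct entry { bool valid; uint64_t mfn; };

    /* Judge the need for an IOMMU-side update from the OLD entry: a
     * previously valid mapping whose MFN changes, or which goes away,
     * needs an update even when the NEW entry is not valid. */
    static bool iommu_update_needed(struct entry old, struct entry new)
    {
        return old.valid && (!new.valid || old.mfn != new.mfn);
    }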

commit 59670732ca1a90977b8a3636e1bd2ef23486e57e
Author: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Date:   Thu Oct 8 12:46:06 2015 +0200

    credit1: fix tickling when it happens from a remote pCPU

    This is especially problematic when the remote pCPU is also in a
    different cpupool than the processor of the vCPU that triggered the
    tickling.

    In fact, it is possible that we get as far as calling vcpu_unblock()-->
    vcpu_wake()-->csched_vcpu_wake()-->__runq_tickle() for the vCPU 'vc',
    but all while running on a pCPU that is different from 'vc->processor'.

    For instance, this can happen when an HVM domain runs in a cpupool
    with a scheduler different from the default one and issues IOREQs
    to Dom0, which runs in Pool-0 under the default scheduler.
    Indeed, exactly in this case the following crash can be observed:

    (XEN) ----[ Xen-4.7-unstable  x86_64  debug=y  Tainted:    C ]----
    (XEN) CPU:    7
    (XEN) RIP:    e008:[<ffff82d0801230de>] __runq_tickle+0x18f/0x430
    (XEN) RFLAGS: 0000000000010086   CONTEXT: hypervisor (d1v0)
    (XEN) rax: 0000000000000001   rbx: ffff8303184fee00   rcx: 0000000000000000
    (XEN) ... ... ...
    (XEN) Xen stack trace from rsp=ffff83031fa57a08:
    (XEN)    ffff82d0801fe664 ffff82d08033c820 0000000100000002 0000000a00000001
    (XEN)    0000000000006831 0000000000000000 0000000000000000 0000000000000000
    (XEN) ... ... ...
    (XEN) Xen call trace:
    (XEN)    [<ffff82d0801230de>] __runq_tickle+0x18f/0x430
    (XEN)    [<ffff82d08012348a>] csched_vcpu_wake+0x10b/0x110
    (XEN)    [<ffff82d08012b421>] vcpu_wake+0x20a/0x3ce
    (XEN)    [<ffff82d08012b91c>] vcpu_unblock+0x4b/0x4e
    (XEN)    [<ffff82d080167bd0>] vcpu_kick+0x17/0x61
    (XEN)    [<ffff82d080167c46>] vcpu_mark_events_pending+0x2c/0x2f
    (XEN)    [<ffff82d08010ac35>] evtchn_fifo_set_pending+0x381/0x3f6
    (XEN)    [<ffff82d08010a0f6>] notify_via_xen_event_channel+0xc9/0xd6
    (XEN)    [<ffff82d0801c29ed>] hvm_send_ioreq+0x3e9/0x441
    (XEN)    [<ffff82d0801bba7d>] hvmemul_do_io+0x23f/0x2d2
    (XEN)    [<ffff82d0801bbb43>] hvmemul_do_io_buffer+0x33/0x64
    (XEN)    [<ffff82d0801bc92b>] hvmemul_do_pio_buffer+0x35/0x37
    (XEN)    [<ffff82d0801cc49f>] handle_pio+0x58/0x14c
    (XEN)    [<ffff82d0801eabcb>] vmx_vmexit_handler+0x16b3/0x1bea
    (XEN)    [<ffff82d0801efd21>] vmx_asm_vmexit_handler+0x41/0xc0

    In this case, pCPU 7 is not in Pool-0, while the (Dom0's) vCPU being
    woken is. pCPU 7's pool has a different scheduler than credit, but it
    is nevertheless from pCPU 7 that we are waking Dom0's vCPUs.
    Therefore, the current code tries to access csched_balance_mask for
    pCPU 7, but that is not defined, and hence the Oops.

    (Note that, if the two pools run the same scheduler, we see no
    Oops, but things are still conceptually wrong.)

    Cure things by making the csched_balance_mask macro accept a
    parameter for fetching a specific pCPU's mask (instead of always
    using smp_processor_id()).

    Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    Reviewed-by: Juergen Gross <jgross@xxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: ea5637968a09a81a64fa5fd73ce49b4ea9789e12
    master date: 2015-09-30 14:44:22 +0200
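
The shape of the interface change, as a sketch (the macro name and the
use of smp_processor_id() are from the commit message; the per_cpu
plumbing is an assumption):

    /* Before: implicitly the CURRENT pCPU's mask -- wrong when the
     * tickling runs on a remote pCPU, possibly in a cpupool where the
     * mask is not even defined. */
    #define csched_balance_mask      (per_cpu(balance_mask, smp_processor_id()))

    /* After: the caller names the pCPU whose mask it wants. */
    #define csched_balance_mask(cpu) (per_cpu(balance_mask, cpu))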

commit 03f29a83fbee2b8f9e9c0c0c63da836a4411f1c9
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 8 12:45:34 2015 +0200

    x86/p2m-pt: ignore pt-share flag for shadow mode guests

    There is no page table sharing in shadow mode.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: c0a85795d864dd64c116af661bf676d66ddfd5fc
    master date: 2015-09-29 13:56:03 +0200
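
As a sketch of the guard this implies (predicate and helper names are
assumptions, not the actual patch):

    /* Sharing the P2M page tables with the IOMMU only makes sense
     * under HAP; a shadow-mode guest must never take this path,
     * whatever the pt-share flag says. */
    if ( hap_enabled(d) && iommu_hap_pt_share )
        share_p2m_with_iommu(d);    /* hypothetical helper */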

commit 7d17ce9b0a99cdb74fbbe9ac33aa0ce6e503aba1
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 8 12:45:08 2015 +0200

    x86/p2m-pt: delay freeing of intermediate page tables

    Old intermediate page tables must be freed only after the IOMMU side
    updates/flushes have been carried out.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: 960265fbd878cdc9841473b755e4ccc9eb1942d2
    master date: 2015-09-29 13:55:34 +0200
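
The required ordering, sketched with hypothetical helper names:

    /* Swap in the new intermediate table, update/flush the IOMMU, and
     * only then free the old table -- otherwise the IOMMU may still be
     * walking the page that was just freed. */
    old = replace_intermediate_table(p2m, gfn, new_table);
    iommu_flush(d, gfn);
    free_intermediate_table(old);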

commit 2327dadbebf757b0aa6f45145fa0d8f4e065ed7c
Author: Quan Xu <quan.xu@xxxxxxxxx>
Date:   Thu Oct 8 12:44:27 2015 +0200

    vt-d: fix IM bit mask and unmask of Fault Event Control Register

    Bits 0:29 in the Fault Event Control Register are 'Reserved and
    Preserved'; software cannot write 0 to them unconditionally and must
    preserve the value read when writing.

    Signed-off-by: Quan Xu <quan.xu@xxxxxxxxx>
    Acked-by: Yang Zhang <yang.z.zhang@xxxxxxxxx>

    vt-d: fix IM bit unmask of Fault Event Control Register in init_vtd_hw()

    Bits 0:29 in the Fault Event Control Register are 'Reserved and
    Preserved'; software cannot write 0 to them unconditionally and must
    preserve the value read when writing.

    Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
    Signed-off-by: Quan Xu <quan.xu@xxxxxxxxx>
    master commit: 86f3ff9fc4cc3cb69b96c1de74bcc51f738fe2b9
    master date: 2015-09-25 09:08:22 +0200
    master commit: 26b300bd727ef00a8f60329212a83c3b027a48f7
    master date: 2015-09-25 18:03:04 +0200
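
The preserve-on-write pattern the fix applies, sketched with assumed
register accessor and constant names:

    uint32_t sts;

    /* Mask: read the current value, set only the IM bit (bit 31), and
     * write the whole word back, preserving reserved bits 0:29. */
    sts = dmar_readl(iommu->reg, DMAR_FECTL_REG);
    dmar_writel(iommu->reg, DMAR_FECTL_REG, sts | DMA_FECTL_IM);

    /* Unmask: the same read-modify-write, clearing only the IM bit. */
    sts = dmar_readl(iommu->reg, DMAR_FECTL_REG);
    dmar_writel(iommu->reg, DMAR_FECTL_REG, sts & ~DMA_FECTL_IM);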

commit 964150bf9deb592b972b0a28741cbd5e88469c3d
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Thu Oct 8 12:43:53 2015 +0200

    xen/xsm: Make p->policyvers be a local variable (ver) to shut up GCC 5.1.1 warnings.

    policydb.c: In function 'user_read':
    policydb.c:1443:26: error: 'buf[2]' may be used uninitialized in this function [-Werror=maybe-uninitialized]
             usrdatum->bounds = le32_to_cpu(buf[2]);
                              ^
    cc1: all warnings being treated as errors

    Which (as Andrew mentioned) is because GCC cannot assume that
    'p->policyvers' has the same value between checks.

    We make it local, shorten the name to 'ver', and the warnings go
    away. We also update another call site in the same way, to bring it
    in line with the rest of the functions.

    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
    Acked-by: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
    master commit: 6a2f81459e1455d65a9a6f78dd2a0d0278619680
    master date: 2015-09-22 12:09:03 -0400
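
A sketch of the pattern (simplified from the SELinux-derived code;
buf, next_entry() and POLICYDB_VERSION_BOUNDARY follow the warning
context, the exact calls are assumptions):

    uint32_t ver = p->policyvers;   /* read the field exactly once */

    if ( ver >= POLICYDB_VERSION_BOUNDARY )
        rc = next_entry(buf, fp, sizeof(uint32_t) * 3); /* fills buf[2] */
    else
        rc = next_entry(buf, fp, sizeof(uint32_t) * 2);
    /* ... error handling elided ... */
    if ( ver >= POLICYDB_VERSION_BOUNDARY )
        usrdatum->bounds = le32_to_cpu(buf[2]); /* provably initialized */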

commit ef632a2d4c2e8d011ba747cef3722d9361739680
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Oct 8 12:43:18 2015 +0200

    x86/sysctl: don't clobber memory if NCAPINTS > ARRAY_SIZE(pi->hw_cap)

    There is no current problem, as NCAPINTS and pi->hw_cap are both 8
    entries, but the limit should be calculated appropriately so as to
    avoid hypervisor stack corruption if the two ever get out of sync.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: c373b912e74659f0e0898ae93e89513694cfd94e
    master date: 2015-09-16 11:22:00 +0200
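
A plausible form of the clamped copy (pi->hw_cap and NCAPINTS are from
the commit; the source array name is an assumption):

    /* Copy at most as many feature words as the destination holds, so
     * growing NCAPINTS can no longer overrun pi->hw_cap on the stack. */
    memcpy(pi->hw_cap, boot_cpu_data.x86_capability,
           min(sizeof(pi->hw_cap), sizeof(boot_cpu_data.x86_capability)));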

commit 55d626318e9047cd38e1252906e1fbca123a5068
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 8 12:42:50 2015 +0200

    x86/MSI: fail if no hardware support

    This is to guard against buggy callers (luckily Dom0 only) invoking
    the respective hypercall for a device that is not MSI-capable.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: c7d5d5d8ea1ecbd6ef8b47dace4dec825f0f6e48
    master date: 2015-09-16 11:20:27 +0200
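
A sketch of the early check (the capability-lookup helper is an
assumption):

    /* Refuse the request up front when the device exposes no MSI
     * capability in config space, rather than trusting the caller. */
    if ( !pci_find_cap_offset(seg, bus, slot, func, PCI_CAP_ID_MSI) )
        return -ENODEV;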

commit c4af95f46cefcb7a10cdaf72e0222bab2f290ae7
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 8 12:42:16 2015 +0200

    x86/p2m: fix mismatched unlock

    Luckily, due to gfn_unlock() currently mapping to p2m_unlock(), this is
    only a cosmetic issue right now.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: 1f180822ad3fe83fe293393ec175f14ded98f082
    master date: 2015-09-14 13:39:19 +0200
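
The shape of the fix, sketched: the unlock must textually match the
lock, even while the two macros happen to alias:

    gfn_lock(p2m, gfn, 0);
    /* ... p2m manipulation ... */
    gfn_unlock(p2m, gfn, 0);    /* was: p2m_unlock(p2m) */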

commit fbb3881ace18434b9490b74c067fd67bd9d681a5
Author: Kouya Shimura <kouya@xxxxxxxxxxxxxx>
Date:   Thu Oct 8 12:41:22 2015 +0200

    x86/hvm: fix saved pmtimer and hpet values

    The ACPI PM timer is sometimes broken after live migration, because
    vcpu->arch.hvm_vcpu.guest_time is always zero in anything other
    than "delay for missed ticks mode". Even in "delay for missed ticks
    mode", a vCPU's guest_time field is not valid (i.e. zero) while the
    vCPU is blocked (see the pt_save_timer function).

    The original author (Tim Deegan) of pmtimer_save() must have
    intended it to save the last scheduled time of the vCPU.
    Unfortunately, that already implied this bug; FYI, at the time
    there was no timer mode other than "delay for missed ticks mode".

    For consistency with the HPET, pmtimer_save() should use
    hvm_get_guest_time() to update the counter, just as hpet_save()
    does.

    Without this patch, the clock of a Windows Server 2012 R2 guest
    without an HPET might leap forward several minutes on live
    migration.

    Signed-off-by: Kouya Shimura <kouya@xxxxxxxxxxxxxx>

    Retain use of ->arch.hvm_vcpu.guest_time when non-zero. Do the inverse
    adjustment for vHPET.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    Reviewed-by: Kouya Shimura <kouya@xxxxxxxxxxxxxx>
    master commit: 244582a01dcb49fa30083725964a066937cc94f2
    master date: 2015-09-11 16:24:56 +0200
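
A sketch of the resulting save-path logic (guest_time and
hvm_get_guest_time() are named in the commit; the surrounding
structure is assumed):

    /* Use the cached per-vCPU time when non-zero; otherwise (other
     * timer modes, or a blocked vCPU) fall back to the live clock. */
    uint64_t guest_time = v->arch.hvm_vcpu.guest_time;

    if ( !guest_time )
        guest_time = hvm_get_guest_time(v);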
========================================

_______________________________________________
osstest-output mailing list
osstest-output@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/osstest-output