
[xen-4.5-testing test] 85360: regressions - FAIL



flight 85360 xen-4.5-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/85360/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pygrub       9 debian-di-install         fail REGR. vs. 83135

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     15 guest-start/debian.repeat    fail   like 83003
 test-amd64-amd64-xl-rtds      6 xen-boot                     fail   like 83135
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop              fail like 83135
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop             fail like 83135
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop             fail like 83135
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop              fail like 83135

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-intel 11 guest-start                  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      10 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 10 guest-start                  fail never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt-raw 10 guest-start                  fail   never pass
 test-armhf-armhf-xl-rtds     13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  d165c490224da17c5dcaa2964fbcf59cd7dedc56
baseline version:
 xen                  fe71162ab965d4a3344bb867f88e967806c80af5

Last test of basis    83135  2016-02-19 06:43:29 Z   15 days
Failing since         84927  2016-03-01 13:45:33 Z    4 days    4 attempts
Testing same since    85360  2016-03-04 18:51:43 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Ian Campbell <ian.campbell@xxxxxxxxxx>
  Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Julien Grall <julien.grall@xxxxxxxxxx>
  Tim Deegan <tim@xxxxxxx>
  Wei Liu <wei.liu2@xxxxxxxxxx>

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumpuserxen                                      pass    
 build-i386-rumpuserxen                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvh-amd                                  fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumpuserxen-amd64                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumpuserxen-i386                             pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvh-intel                                fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           pass    
 test-amd64-i386-xl-qemut-winxpsp3                            pass    
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass    
 test-amd64-i386-xl-qemuu-winxpsp3                            pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d165c490224da17c5dcaa2964fbcf59cd7dedc56
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Fri Mar 4 13:16:07 2016 +0100

    x86emul: limit-check branch targets
    
    All branches need to raise #GP when their target violates the segment
    limit (in 16- and 32-bit modes) or is non-canonical (in 64-bit mode).
    For near branches this is achieved via a zero-byte instruction fetch
    from the target address (resulting in address translation and
    validation without an actual read from memory), while far branches are
    dealt with by breaking the segment register loading into a
    read-and-validate part and a write part. The latter also allows
    correcting some ordering issues in how the individual emulation steps
    are carried out: before any machine state is updated, all exceptions
    unrelated to that update should already have been raised (i.e. the
    only ones that can result in partly updated state are faulting memory
    writes [pushes]).
    
    Note that while not immediately needed here, write and distinct read
    emulation routines get updated to deal with zero byte accesses too, for
    overall consistency.
    
    Reported-by: 刘令 <liuling-it@xxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Tim Deegan <tim@xxxxxxx>
    master commit: 81d3a0b26c1672c60b2a54dd8780e6f6472d2328
    master date: 2016-02-26 12:14:39 +0100

commit 9ab5f84dfc48e8523e661110700b357e54149b1b
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Fri Mar 4 13:15:32 2016 +0100

    x86/hvm: print register state upon triple fault
    
    A sample looks like:
    
    (XEN) d1v0 Triple fault - invoking HVM shutdown action 1
    (XEN) *** Dumping Dom1 vcpu#0 state: ***
    (XEN) ----[ Xen-4.7-unstable  x86_64  debug=y  Not tainted ]----
    (XEN) CPU:    2
    (XEN) RIP:    0000:[<0000000000100005>]
    (XEN) RFLAGS: 0000000000010002   CONTEXT: hvm guest (d1v0)
    (XEN) rax: 0000000000000020   rbx: 0000000000000000   rcx: 0000000000000000
    (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
    (XEN) rbp: 0000000000000000   rsp: 0000000000000000   r8:  0000000000000000
    (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
    (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
    (XEN) r15: 0000000000000000   cr0: 0000000000000011   cr4: 0000000000000000
    (XEN) cr3: 0000000000000000   cr2: 0000000000000000
    (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: 0000
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 1329105943179b91da2466431cc972e223900ced
    master date: 2016-02-25 13:02:29 +0100

commit 4368db0d96ddcd020546105de86ad35bbe63a8c3
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Fri Mar 4 13:14:39 2016 +0100

    x86emul: fix rIP handling
    
    Deal with rIP just like with any other register: Truncate to designated
    width upon entry, write back the zero-extended 32-bit value when
    emulating 32-bit code, and leave the upper 48 bits unchanged for 16-bit
    code.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 0640ffb67fb92e2561c63b9308c27b71281fdd72
    master date: 2016-02-18 15:05:34 +0100

commit a48c1d36a45f16848c41449e08b1483a3eb41d8a
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Fri Mar 4 13:13:22 2016 +0100

    xen/arm: vgic-v2: Implement correctly ITARGETSR0 - ITARGETSR7 read-only
    
    Each ITARGETSR register is 4 bytes wide and the offset is in bytes.
    
    The current implementation computes the end of the range incorrectly,
    with the result that only ITARGETSR{0,1} are emulated as read-only;
    the rest are treated as read-write.
    
    As 8 registers should be read-only, the end of the range should be
    ITARGETSR + (4 * 8) - 1.
    
    For convenience introduce ITARGETSR7 and ITARGETSR8.
    
    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Reviewed-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
    (cherry picked from commit bc50de883847c1ebc7c8b4d73283d9be6c4df38e)

commit 86060f89ad4120a7f8b54bd6bbe055da2c2cf435
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Fri Mar 4 13:11:20 2016 +0100

    xen/arm: vgic-v2: Report the correct GICC size to the guest
    
    The GICv2 DT node is usually used by the guest to know the address/size
    of the regions (GICD, GICC...) to map into its virtual memory.
    
    While the GICv2 spec requires the size of the GICC to be 8KB, we
    correctly do an 8KB stage-2 mapping but erroneously report 256 in the
    device tree (based on GUEST_GICC_SIZE).
    
    I bet we didn't see any issue so far because all the registers except
    GICC_DIR live in the first 256 bytes of the GICC region, and all the
    guests I have seen so far drive the GIC with GICC_CTLR.EOImode = 0.
    
    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    [ ijc -- fixed some typos in commit message ]
    
    (cherry picked from commit 8ee6d574b7073b5c98fcf94d20a53197609b85e1)

commit 812406cf2b6731d07f0f840d799fcfa5917dbaf4
Author: Ian Campbell <ian.campbell@xxxxxxxxxx>
Date:   Thu Nov 5 14:46:12 2015 +0000

    tools: pygrub: if partition table is empty, try treating as a whole disk
    
    pygrub (in identify_disk_image()) detects a DOS style partition table
    via the presence of the 0xaa55 signature at the end of the first
    sector of the disk.
    
    However this signature is also present in whole-disk configurations
    when there is an MBR on the disk. Many filesystems (e.g. ext[234])
    include leading padding in their on disk format specifically to enable
    this.
    
    So if we think we have a DOS partition table but do not find any
    actual partition table entries, we may as well try looking at it as a
    whole-disk image. Worst case, we probe and find there isn't anything
    there.
    
    This was reported by Sjors Gielen in Debian bug #745419. The fix was
    inspired by a patch by Adi Kriegisch in
    https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=745419#27
    
    Tested by genext2fs'ing my /boot into a new raw image (works) and
    then:
       dd if=/usr/lib/grub/i386-pc/g2ldr.mbr of=img conv=notrunc bs=512 count=1
    
    to add an MBR (with 0xaa55 signature) to it, which after this patch
    also works.
    
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Cc: 745419-forwarded@xxxxxxxxxxxxxxx
    (cherry picked from commit fb31b1475f1bf179f033b8de3f0e173006fd77e9)
    (cherry picked from commit 6c9b1bcce4fcc872edddd44f88390a67d5954069)
(qemu changes not included)

_______________________________________________
osstest-output mailing list
osstest-output@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/osstest-output

 

