
[xen-4.5-testing baseline-only test] 67663: regressions - FAIL



This run is configured for baseline tests only.

flight 67663 xen-4.5-testing real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/67663/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     14 guest-saverestore         fail REGR. vs. 67600
 test-amd64-amd64-amd64-pvgrub 10 guest-start              fail REGR. vs. 67600
 test-amd64-amd64-libvirt-vhd 13 guest-saverestore         fail REGR. vs. 67600
 test-amd64-i386-xl-qemuu-winxpsp3 15 guest-localmigrate/x10 fail REGR. vs. 67600

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-rumpuserxen-amd64 15 rumpuserxen-demo-xenstorels/xenstorels.repeat fail REGR. vs. 67600
 test-amd64-amd64-xl-rtds      6 xen-boot                     fail   like 67600
 test-amd64-amd64-qemuu-nested-intel 13 xen-boot/l1             fail like 67600
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail like 67600
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop             fail like 67600
 test-amd64-amd64-xl-qemut-winxpsp3  9 windows-install          fail like 67600

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-intel 11 guest-start                  fail  never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail   never pass
 test-armhf-armhf-xl-midway   12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-midway   13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start                  fail   never pass
 test-armhf-armhf-xl-rtds     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 10 guest-start                  fail never pass
 test-armhf-armhf-libvirt-raw 10 guest-start                  fail   never pass
 test-armhf-armhf-xl-vhd      10 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop              fail never pass

version targeted for testing:
 xen                  d50078b9f2d7df55157ca353d889b13a8f3f0bc6
baseline version:
 xen                  462f714b1c776cad5b85132033fbf2f04d12d77c

Last test of basis    67600  2016-08-27 02:16:05 Z   11 days
Testing same since    67663  2016-09-06 22:19:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Dario Faggioli <dario.faggioli@xxxxxxxxxx>
  George Dunlap <george.dunlap@xxxxxxxxxx>
  George Dunlap <george.dunlap@xxxxxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumpuserxen                                      pass    
 build-i386-rumpuserxen                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvh-amd                                  fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumpuserxen-amd64                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumpuserxen-i386                             pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvh-intel                                fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-armhf-armhf-xl-midway                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-i386-xl-qemut-winxpsp3                            pass    
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass    
 test-amd64-i386-xl-qemuu-winxpsp3                            fail    


------------------------------------------------------------
sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
    http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.

------------------------------------------------------------
commit d50078b9f2d7df55157ca353d889b13a8f3f0bc6
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Sep 6 12:13:43 2016 +0200

    memory: fix compat handling of XENMEM_access_op
    
    Within compat_memory_op() this needs to be placed in the first switch()
    statement, or it ends up being dead code (as that first switch() has a
    default case chaining to compat_arch_memory_op()).
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Tested-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 8d6af808a7e9d9ae1d129e1e5a0def7f8b2333ee
    master date: 2016-09-02 14:19:51 +0200
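
For context, the dead-code trap here is a first switch() whose default case
chains away and returns, so a later switch() never sees unlisted ops. A
minimal standalone sketch (op numbers and the names compat_memory_op_demo
and arch_fallback are illustrative, not the actual Xen code):

#include <stdio.h>

/* Illustrative op numbers, not the real XENMEM_* values. */
enum { OP_INCREASE_RESERVATION = 0, OP_ACCESS_OP = 21 };

static int arch_fallback(int op)
{
    printf("arch fallback handles op %d\n", op);
    return 0;
}

static int compat_memory_op_demo(int op)
{
    /* First switch(): argument translation for the ops it knows. */
    switch ( op )
    {
    case OP_INCREASE_RESERVATION:
        printf("translating op %d\n", op);
        break;
    default:
        /* The default chains to the arch handler and returns, so any
         * op handled only in the second switch() below is dead code. */
        return arch_fallback(op);
    }

    /* Second switch(): post-processing.  A "case OP_ACCESS_OP:" placed
     * only here would never run; it has to go in the first switch(). */
    switch ( op )
    {
    case OP_INCREASE_RESERVATION:
        printf("post-processing op %d\n", op);
        break;
    }

    return 0;
}

int main(void)
{
    compat_memory_op_demo(OP_INCREASE_RESERVATION);
    compat_memory_op_demo(OP_ACCESS_OP);  /* never reaches the second switch */
    return 0;
}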

commit 42ea0590a5f6244b67b6292b1151b8bf7aaeed05
Author: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Date:   Tue Sep 6 12:13:18 2016 +0200

    credit1: fix a race when picking initial pCPU for a vCPU
    
    In the Credit1 hunk of 9f358ddd69463 ("xen: Have
    schedulers revise initial placement") csched_cpu_pick()
    is called without taking the runqueue lock of the
    (temporary) pCPU that the vCPU has been assigned to
    (e.g., in XEN_DOMCTL_max_vcpus).
    
    However, although 'hidden' in the IS_RUNQ_IDLE() macro,
    that function does access the runq (for doing load
    balancing calculations). Two scenarios are possible:
     1) we are on cpu X, and IS_RUNQ_IDLE() peeks at cpu's
        X own runq;
     2) we are on cpu X, but IS_RUNQ_IDLE() peeks at some
        other cpu's runq.
    
    Scenario 2) absolutely requires that the appropriate
    runq lock is taken. Scenario 1) works even without
    taking the cpu's own runq lock. That is actually what
    happens when _csched_pick_cpu() is called from
    csched_vcpu_acct() (in turn, called by csched_tick()).
    
    Races have been observed and reported (by both XenServer
    own testing and OSSTest [1]), in the form of
    IS_RUNQ_IDLE() falling over LIST_POISON, because we're
    not currently holding the proper lock, in
    csched_vcpu_insert(), when scenario 1) occurs.
    
    However, for better robustness, from now on we always
    ask for the proper runq lock to be held when calling
    IS_RUNQ_IDLE() (which is also becoming a static inline
    function instead of macro).
    
    In order to comply with that, we take the lock around
    the call to _csched_cpu_pick() in csched_vcpu_acct().
    
    [1] https://lists.xen.org/archives/html/xen-devel/2016-08/msg02144.html
    
    Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: 9109bf55084398c4547b8956906410c158eb9a17
    master date: 2016-09-02 14:17:55 +0200
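
A toy model of the locking discipline the commit adopts, using pthread
mutexes in place of Xen's per-runqueue spinlocks; runq_is_idle() and
pick_cpu() are hypothetical stand-ins for IS_RUNQ_IDLE() and
_csched_cpu_pick():

#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4

/* Hypothetical per-CPU run queue, guarded by its own lock. */
struct runq {
    pthread_mutex_t lock;
    int nr_queued;
};

static struct runq runqs[NR_CPUS];

/* Stand-in for IS_RUNQ_IDLE(): peeks at a run queue, so the caller
 * must hold runqs[cpu].lock (the commit turns the Xen macro into a
 * static inline with exactly this requirement). */
static inline int runq_is_idle(int cpu)
{
    return runqs[cpu].nr_queued == 0;
}

/* Stand-in for _csched_cpu_pick(): may inspect other CPUs' queues
 * (scenario 2 above), so it takes each queue's lock before peeking. */
static int pick_cpu(void)
{
    int cpu, best = 0;

    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
    {
        pthread_mutex_lock(&runqs[cpu].lock);
        if ( runq_is_idle(cpu) )
            best = cpu;
        pthread_mutex_unlock(&runqs[cpu].lock);
    }

    return best;
}

int main(void)
{
    int cpu;

    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
        pthread_mutex_init(&runqs[cpu].lock, NULL);

    runqs[0].nr_queued = 2;
    printf("picked pCPU %d\n", pick_cpu());
    return 0;
}

(Compile with cc -pthread. Peeking at a remote queue without its lock is
what let IS_RUNQ_IDLE() observe a list mid-update, the LIST_POISON symptom
described above.)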

commit 9e06b02bbf2f9264f782b686f6d454b54bbbf66a
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Sep 6 12:12:49 2016 +0200

    x86/32on64: misc adjustments to call gate emulation
    
    - There's no 32-bit displacement in 16-bit addressing mode.
    - It is wrong to ASSERT() anything on parts of an instruction fetched
      from guest memory.
    - The two scaling bits of a SIB byte don't affect whether there is a
      scaled index register or not.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: ee1cc4bfdca84d526805c4c72302c026f5e9cd94
    master date: 2016-09-01 15:23:46 +0200
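
To make the SIB point concrete: bits 7..6 of a SIB byte encode the scale,
bits 5..3 the index register, and bits 2..0 the base, so the presence of a
scaled index depends only on the index field (4 means "none", absent
REX.X), never on the scale bits. A small standalone decoder sketch:

#include <stdint.h>
#include <stdio.h>

/* Decode an x86 SIB byte: scale[7:6], index[5:3], base[2:0]. */
static void decode_sib(uint8_t sib)
{
    unsigned int scale = (sib >> 6) & 3;
    unsigned int index = (sib >> 3) & 7;
    unsigned int base  = sib & 7;

    /* Only the index field decides whether a scaled index register
     * exists; testing the scale bits for this is the bug being fixed. */
    if ( index == 4 )
        printf("sib %02x: no scaled index, base r%u\n", sib, base);
    else
        printf("sib %02x: index r%u * %u, base r%u\n",
               sib, index, 1u << scale, base);
}

int main(void)
{
    decode_sib(0x24);  /* scale field 0 (x1), index 4: no scaled index */
    decode_sib(0xa4);  /* scale field 2 (x4), index 4: still none */
    decode_sib(0x4c);  /* scale field 1 (x2), index 1: index present */
    return 0;
}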

commit e824aae1930d579c2925a9110f1f9c270062a206
Author: George Dunlap <george.dunlap@xxxxxxxxxx>
Date:   Tue Sep 6 12:11:53 2016 +0200

    xen: Remove buggy initial placement algorithm
    
    The initial placement algorithm sometimes picks cpus outside of the
    mask it's given, does a lot of unnecessary bitmasking, does its own
    separate load calculation, and completely ignores vcpu hard and soft
    affinities.  Just get rid of it and rely on the schedulers to do
    initial placement.
    
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Reviewed-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: d5438accceecc8172db2d37d98b695eb8bc43afc
    master date: 2016-07-26 10:44:06 +0100
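
A sketch of the bug class being removed (hypothetical names, not the
deleted Xen code): a placement routine must restrict its search to the
mask it is handed, or it can return a cpu the caller never offered.

#include <stdio.h>

#define NR_CPUS 8

/* Hypothetical per-CPU load figures, for illustration only. */
static const int load[NR_CPUS] = { 3, 1, 4, 1, 5, 9, 2, 6 };

static int pick_least_loaded(unsigned int allowed_mask)
{
    int cpu, best = -1;

    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
    {
        /* Skipping this filter is how a pick lands outside the given
         * mask; the removed algorithm also ignored hard/soft affinity. */
        if ( !(allowed_mask & (1u << cpu)) )
            continue;
        if ( best < 0 || load[cpu] < load[best] )
            best = cpu;
    }

    return best;
}

int main(void)
{
    /* Allow CPUs 4-7 only; CPU 1 is lighter but must not be chosen. */
    printf("picked cpu %d\n", pick_least_loaded(0xf0));
    return 0;
}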

commit 2e56416a7a1c4c9c98452464f827dd792164c262
Author: George Dunlap <george.dunlap@xxxxxxxxxx>
Date:   Tue Sep 6 12:11:28 2016 +0200

    xen: Have schedulers revise initial placement
    
    The generic domain creation logic in
    xen/common/domctl.c:default_vcpu0_location() attempts to try to do
    initial placement load-balancing by placing vcpu 0 on the least-busy
    non-primary hyperthread available.  Unfortunately, the logic can end
    up picking a pcpu that's not in the online mask.  When this is passed
    to a scheduler which assumes that the initial assignment is
    valid, it causes a null pointer dereference looking up the runqueue.
    
    Furthermore, this initial placement doesn't take into account hard or
    soft affinity, or any scheduler-specific knowledge (such as historic
    runqueue load, as in credit2).
    
    To solve this, when inserting a vcpu, always call the per-scheduler
    "pick" function to revise the initial placement.  This will
    automatically take all knowledge the scheduler has into account.
    
    csched2_cpu_pick ASSERTs that the vcpu's pcpu scheduler lock has been
    taken.  Grab and release the lock to minimize time spent with irqs
    disabled.
    
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Reviewed-by: Meng Xu <mengxu@xxxxxxxxxxxxx>
    Reviewed-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    master commit: 9f358ddd69463fa8fb65cf67beb5f6f0d3350e32
    master date: 2016-07-26 10:42:49 +0100
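
The shape of the fix, as a toy sketch (struct vcpu_demo and the function
names are hypothetical; Xen's real code goes through the per-scheduler
insert and pick hooks):

#include <stdio.h>

struct vcpu_demo {
    int processor;   /* tentative pCPU chosen by generic code */
};

/* Stand-in for a scheduler's pick hook: a real one consults affinity
 * and scheduler-specific state; here everything routes to pCPU 0. */
static int sched_pick_cpu(const struct vcpu_demo *v)
{
    (void)v;
    return 0;
}

/* On insert, always let the scheduler revise the initial placement
 * instead of trusting the possibly-invalid pCPU chosen earlier. */
static void sched_insert_vcpu(struct vcpu_demo *v)
{
    /* In the real code the pcpu scheduler lock is taken here and
     * dropped right after the pick, to minimize irqs-off time. */
    v->processor = sched_pick_cpu(v);
}

int main(void)
{
    struct vcpu_demo v = { .processor = 42 };  /* bogus initial pick */

    sched_insert_vcpu(&v);
    printf("vcpu placed on pCPU %d\n", v.processor);
    return 0;
}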

commit cda8e7e13f0abc2e7020ab658ea688bcc4c9a015
Author: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Date:   Tue Sep 6 12:10:40 2016 +0200

    sched: better handle (not) inserting idle vCPUs in runqueues
    
    Idle vCPUs are set to run immediately, as a part of their
    own initialization, so we shouldn't even try to put them
    in a runqueue. In fact, no scheduler does that, even when
    asked to (that is rather explicit in Credit2 and RTDS, a
    bit less evident in Credit1).
    
    Let's make things look as follows:
     - in generic code, explicitly avoid even trying to
       insert idle vCPUs in runqueues;
     - in specific schedulers' code, enforce that.
    
    Note that, as csched_vcpu_insert() is no longer being
    called, during boot (from sched_init_vcpu()) we can
    safely avoid saving the flags when taking the runqueue
    lock.
    
    Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    Acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
    Reviewed-by: Juergen Gross <jgross@xxxxxxxx>
    master commit: 6b53bb4ab3c9bd5eccde88a5175cf72589ba6d52
    master date: 2015-11-24 14:49:47 +0100
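
A minimal sketch of the invariant (field and function names are
illustrative): generic code skips queuing idle vCPUs, and each
scheduler's insert asserts it never sees one.

#include <assert.h>
#include <stdio.h>

struct vcpu_demo {
    int is_idle;
    int queued;
};

/* Scheduler-specific insert: enforce the invariant locally. */
static void sched_insert(struct vcpu_demo *v)
{
    assert(!v->is_idle);  /* schedulers may now rely on this */
    v->queued = 1;
}

/* Generic code: never even try to queue an idle vCPU; it is set
 * running as part of its own initialization. */
static void sched_init_vcpu_demo(struct vcpu_demo *v)
{
    if ( !v->is_idle )
        sched_insert(v);
}

int main(void)
{
    struct vcpu_demo idle = { .is_idle = 1 }, normal = { .is_idle = 0 };

    sched_init_vcpu_demo(&idle);
    sched_init_vcpu_demo(&normal);
    printf("idle queued: %d, normal queued: %d\n",
           idle.queued, normal.queued);
    return 0;
}
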
(qemu changes not included)

_______________________________________________
osstest-output mailing list
osstest-output@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/osstest-output

 

