
[xen-unstable-smoke test] 110057: regressions - trouble: blocked/broken/fail/pass



flight 110057 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110057/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   5 xen-build                fail REGR. vs. 110043

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl          15 guest-start/debian.repeat  fail pass in 110052

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386  1 build-check(1)         blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f4a27a000d03e121eb1a36c485049a820c395539
baseline version:
 xen                  3d2010f9ffeacc8836811420460e15f2c1233695

Last test of basis   110043  2017-06-06 17:02:03 Z    0 days
Testing same since   110052  2017-06-06 22:48:29 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@xxxxxxxx>
  Julien Grall <julien.grall@xxxxxxx>
  Punit Agrawal <punit.agrawal@xxxxxxx>

jobs:
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      broken  
 test-amd64-amd64-xl-qemuu-debianhvm-i386                     blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f4a27a000d03e121eb1a36c485049a820c395539
Author: Julien Grall <julien.grall@xxxxxxx>
Date:   Tue May 23 18:03:36 2017 +0100

    xen/arm: Remove unused helpers access_ok and array_access_ok
    
    The helpers access_ok and array_access_ok are not used on ARM. Remove
    them.
    
    Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
    Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>

commit 726b737574a3c075be95440e572b317a39293a9e
Author: Punit Agrawal <punit.agrawal@xxxxxxx>
Date:   Fri May 26 12:14:07 2017 +0100

    Avoid excess icache flushes in populate_physmap() before domain has been created
    
    populate_physmap() calls alloc_heap_pages() per requested
    extent. alloc_heap_pages() invalidates the entire icache per
    extent. During domain creation, the icache invalidations can be deferred
    until all the extents have been allocated as there is no risk of
    executing stale instructions from the icache.
    
    Introduce a new flag, "MEMF_no_icache_flush", to prevent
    alloc_heap_pages() from performing icache maintenance operations. Use
    the flag in populate_physmap() before the domain has been unpaused, and
    perform the required icache maintenance once at the end of the
    allocation.
    
    One concern is the lack of synchronisation around testing for
    "creation_finished", but in practice the window where it is out of sync
    should be small enough not to matter.
    
    Signed-off-by: Punit Agrawal <punit.agrawal@xxxxxxx>
    Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
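
A rough sketch of the deferral pattern this commit describes (assumed
shapes only, not the actual Xen code; the flag's bit value and the helpers
flush_dcache_page/invalidate_icache/alloc_one_extent are illustrative
stand-ins):

    #include <stdbool.h>

    #define MEMF_no_icache_flush  (1u << 0)   /* illustrative bit value */

    static void flush_dcache_page(unsigned long mfn) { (void)mfn; }
    static void invalidate_icache(void) { }

    /* What the allocator does per extent, gated by the new flag. */
    static void alloc_one_extent(unsigned long mfn, unsigned int memflags)
    {
        flush_dcache_page(mfn);
        if ( !(memflags & MEMF_no_icache_flush) )
            invalidate_icache();          /* per-extent flush (old behaviour) */
    }

    /* populate_physmap()-style loop: defer icache work while still paused. */
    static void populate_extents(unsigned long nr, bool creation_finished)
    {
        unsigned int memflags = creation_finished ? 0 : MEMF_no_icache_flush;

        for ( unsigned long mfn = 0; mfn < nr; mfn++ )
            alloc_one_extent(mfn, memflags);

        /* Domain not yet unpaused: one invalidation covers all extents. */
        if ( !creation_finished )
            invalidate_icache();
    }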

commit 1a0c3e3e28d6cd072734990efcaaec608bf152b1
Author: Punit Agrawal <punit.agrawal@xxxxxxx>
Date:   Fri May 26 12:14:06 2017 +0100

    arm: p2m: Prevent redundant icache flushes
    
    When the toolstack requests flushing the caches, flush_page_to_ram() is
    called for each page of the requested domain. This leads to unnecessary
    icache invalidation operations.
    
    Take responsibility for performing the icache maintenance here and use
    the recently introduced flag to prevent redundant icache operations by
    flush_page_to_ram().
    
    Signed-off-by: Punit Agrawal <punit.agrawal@xxxxxxx>
    Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
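
The caller-side pattern this amounts to, as a hedged sketch (the range
helper below is hypothetical, and invalidate_icache() stands in for
whatever full icache invalidation primitive the caller uses; only
flush_page_to_ram() and its new flag come from the commits in this
flight):

    #include <stdbool.h>

    void flush_page_to_ram(unsigned long mfn, bool sync_icache);
    void invalidate_icache(void);

    /* Flush a range of pages: dcache per page, icache once at the end. */
    static void cache_flush_range(unsigned long start, unsigned long end)
    {
        for ( unsigned long mfn = start; mfn < end; mfn++ )
            flush_page_to_ram(mfn, false);   /* skip per-page icache work */

        invalidate_icache();                 /* single flush for the range */
    }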

commit 54b8651066e82f04db9d9e5b0cc02c26d39ae763
Author: Punit Agrawal <punit.agrawal@xxxxxxx>
Date:   Fri May 26 12:14:05 2017 +0100

    Allow control of icache invalidations when calling flush_page_to_ram()
    
    flush_page_to_ram() unconditionally drops the icache. In certain
    situations this leads to excessive icache flushes when
    flush_page_to_ram() ends up being repeatedly called in a loop.
    
    Introduce a parameter to allow callers of flush_page_to_ram() to take
    responsibility for synchronising the icache. This is in preparation for
    adding logic to make the callers perform the necessary icache
    maintenance operations.
    
    Signed-off-by: Punit Agrawal <punit.agrawal@xxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
(qemu changes not included)
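
For the flush_page_to_ram() change itself, a minimal sketch assuming the
new parameter is a boolean (the mapping and cache helpers are stand-ins,
not the real Xen primitives):

    #include <stdbool.h>
    #include <stddef.h>

    static void clean_and_invalidate_dcache_page(void *va) { (void)va; }
    static void invalidate_icache(void) { }
    static void *map_page(unsigned long mfn) { (void)mfn; return NULL; }
    static void unmap_page(void *va) { (void)va; }

    /* Callers that batch icache maintenance themselves pass sync_icache=false. */
    void flush_page_to_ram(unsigned long mfn, bool sync_icache)
    {
        void *va = map_page(mfn);

        clean_and_invalidate_dcache_page(va);
        unmap_page(va);

        /* Only drop the icache when the caller has not taken it over. */
        if ( sync_icache )
            invalidate_icache();
    }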

_______________________________________________
osstest-output mailing list
osstest-output@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/osstest-output

 

