
[xen-4.7-testing bisection] complete build-amd64



branch xen-4.7-testing
xenbranch xen-4.7-testing
job build-amd64
testid xen-build

Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  164c34dd23bc3ea8d5285752d9270627a93c91f5
  Bug not present: da743dc82adffd36ce2d71776f4ea5afbc186a15
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/110317/


  commit 164c34dd23bc3ea8d5285752d9270627a93c91f5
  Author: Jan Beulich <jbeulich@xxxxxxxx>
  Date:   Fri Jun 9 13:51:34 2017 +0200
  
      hvmloader: avoid tests when they would clobber used memory
      
      First of all limit the memory range used for testing to 4Mb: There's no
      point placing page tables right above 8Mb when they can equally well
      live at the bottom of the chunk at 4Mb - rep_io_test() cares about the
      5Mb...7Mb range only anyway. In a subsequent patch this will then also
      allow simply looking for an unused 4Mb range (instead of using a build
      time determined one).
      
      Extend the "skip tests" condition beyond the "is there enough memory"
      question.
      
      Reported-by: Charles Arnold <carnold@xxxxxxxx>
      Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
      Tested-by: Gary Lin <glin@xxxxxxxx>
      Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
      master commit: 0d6968635ce51a8ed7508ddcf17b3d13a462cb27
      master date: 2017-05-19 16:04:38 +0200
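The quoted commit's "skip tests" condition boils down to an interval-overlap question: would the scratch range the hvmloader tests scribble over intersect memory that is already in use? A much-simplified sketch of that kind of check (plain Python, all names hypothetical; the real hvmloader code is C and consults its own memory map):

```python
MB = 1 << 20

def ranges_overlap(start_a, end_a, start_b, end_b):
    """True if half-open ranges [start_a, end_a) and [start_b, end_b) intersect."""
    return start_a < end_b and start_b < end_a

def should_skip_tests(test_start, test_end, used_ranges):
    """Skip the memory tests if the scratch range would clobber any used range."""
    return any(ranges_overlap(test_start, test_end, s, e) for s, e in used_ranges)

# Example: a scratch window at 4..8 MB, with a used region at 6..8 MB.
print(should_skip_tests(4 * MB, 8 * MB, [(6 * MB, 8 * MB)]))  # True -> skip
```

This is only an illustration of the interval test; the actual condition the commit extends also accounts for whether enough memory exists at all.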


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.7-testing/build-amd64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.
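The cs-bisection-step run below narrows the window between the last passing and the first failing revision until two adjacent commits remain. Its core idea is ordinary bisection; a much-simplified sketch over a linear history (helper names hypothetical — the real tool works on a revision-tuple graph and schedules test flights rather than calling a predicate):

```python
def bisect_first_fail(revisions, test):
    """Given a linear list of revisions where `test` passes at the start and
    fails at the end, return (last_pass, first_fail) as adjacent revisions."""
    lo, hi = 0, len(revisions) - 1  # invariant: revisions[lo] passes, revisions[hi] fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if test(revisions[mid]):
            lo = mid  # still passing: the bug landed later
        else:
            hi = mid  # already failing: the bug is here or earlier
    return revisions[lo], revisions[hi]

# Example: the "bug" appears at revision "r5" in a ten-revision history.
revs = [f"r{i}" for i in range(10)]
print(bisect_first_fail(revs, lambda r: int(r[1:]) < 5))  # ('r4', 'r5')
```

In the log, the pass/fail pair da743dc8... / 164c34dd... plays exactly the role of the returned (last_pass, first_fail) tuple, with repeat flights run to confirm both sides reproduce.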

----------------------------------------
Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/xen-4.7-testing/build-amd64.xen-build 
--summary-out=tmp/110317.bisection-summary --basis-template=109620 
--blessings=real,real-bisect xen-4.7-testing build-amd64 xen-build
Searching for failure / basis pass:
 110244 fail [host=elbling0] / 109620 [host=pinot1] 109490 [host=chardonnay0] 
109054 [host=godello0] 109005 [host=godello1] 108212 [host=godello0] 108166 
[host=godello1] 108137 [host=godello0] 107333 [host=godello0] 107233 
[host=godello0] 107209 [host=godello0] 107021 [host=italia0] 106842 
[host=godello1] 106751 [host=godello0] 106661 [host=rimava0] 106540 
[host=rimava1] 106528 [host=godello0] 106251 [host=godello0] 106057 
[host=godello1] 105967 [host=godello1] 105948 [host=godello1] 105935 
[host=godello0] 105924 [host=godello0] 105855 [host=rimava1] 105819 
[host=nobling0] 105661 [host=godello0] 104551 [host=merlot0] 104303 
[host=godello1] 104275 [host=pinot1] 104250 [host=godello1] 103850 
[host=huxelrebe0] 103802 [host=huxelrebe0] 103419 [host=godello1] 103351 ok.
Failure / basis pass flights: 110244 / 103351
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
(tree in basispass but not in latest: qemu)
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
50d05123378d637897c77cd9e3254e6f0b3e1d23
Basis pass e27a2f17bc2d9d7f8afce2c5918f4f23937b268e 
7a71cea02afe2bf0f04f1cbf91931081dbe9dd76
Generating revisions with ./adhoc-revtuple-generator  
git://xenbits.xen.org/qemu-xen.git#e27a2f17bc2d9d7f8afce2c5918f4f23937b268e-2583eaad3c4e6baebdac6800a26de1e10625b6bb
 
git://xenbits.xen.org/xen.git#7a71cea02afe2bf0f04f1cbf91931081dbe9dd76-50d05123378d637897c77cd9e3254e6f0b3e1d23
Loaded 2008 nodes in revision graph
Searching for test results:
 103270 [host=italia1]
 103351 pass e27a2f17bc2d9d7f8afce2c5918f4f23937b268e 
7a71cea02afe2bf0f04f1cbf91931081dbe9dd76
 103419 [host=godello1]
 103850 [host=huxelrebe0]
 103802 [host=huxelrebe0]
 104250 [host=godello1]
 104274 [host=godello1]
 104284 [host=pinot0]
 104275 [host=pinot1]
 104303 [host=godello1]
 104367 [host=godello1]
 104373 [host=merlot0]
 104374 [host=merlot0]
 104376 [host=merlot0]
 104370 [host=merlot0]
 104360 [host=merlot0]
 104377 [host=merlot0]
 104384 [host=merlot0]
 104382 [host=merlot0]
 104405 [host=merlot0]
 104403 [host=merlot0]
 104408 [host=merlot0]
 104410 [host=merlot0]
 104526 [host=merlot0]
 104418 [host=merlot0]
 104433 [host=merlot0]
 104438 [host=merlot0]
 104430 [host=merlot0]
 104474 [host=merlot0]
 104551 [host=merlot0]
 105661 [host=godello0]
 105819 [host=nobling0]
 105855 [host=rimava1]
 105948 [host=godello1]
 105924 [host=godello0]
 105934 [host=godello1]
 105940 [host=nobling0]
 105939 [host=huxelrebe0]
 105935 [host=godello0]
 106057 [host=godello1]
 105967 [host=godello1]
 106251 [host=godello0]
 106528 [host=godello0]
 106539 [host=godello0]
 106546 [host=godello0]
 106540 [host=rimava1]
 106661 [host=rimava0]
 106751 [host=godello0]
 106842 [host=godello1]
 107021 [host=italia0]
 107209 [host=godello0]
 107233 [host=godello0]
 107333 [host=godello0]
 108137 [host=godello0]
 108166 [host=godello1]
 108212 [host=godello0]
 109004 [host=godello1]
 109018 [host=godello1]
 109005 [host=godello1]
 109040 [host=chardonnay1]
 109054 [host=godello0]
 109490 [host=chardonnay0]
 109620 [host=pinot1]
 110185 fail 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
50d05123378d637897c77cd9e3254e6f0b3e1d23
 110244 fail 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
50d05123378d637897c77cd9e3254e6f0b3e1d23
 110303 pass 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
16f34b7a1903da359c013bd0fb1b80218434f3a1
 110317 fail 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
164c34dd23bc3ea8d5285752d9270627a93c91f5
 110305 pass 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
d8b8a100258127d6bc861219b0232322628c3a13
 110308 pass 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
a5f47620f7f13c4d57c2b664a391398049fb929d
 110309 pass 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
da743dc82adffd36ce2d71776f4ea5afbc186a15
 110297 pass e27a2f17bc2d9d7f8afce2c5918f4f23937b268e 
7a71cea02afe2bf0f04f1cbf91931081dbe9dd76
 110299 fail 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
50d05123378d637897c77cd9e3254e6f0b3e1d23
 110300 pass 15268f91fbe75b38a851c458aef74e693d646ea5 
c782e61edf16f4936aa2e8de79e14b11ef4cd690
 110302 pass 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
42ca46bcdc45342d250047482fafceca01dd57c6
 110310 fail 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
164c34dd23bc3ea8d5285752d9270627a93c91f5
 110314 pass 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
da743dc82adffd36ce2d71776f4ea5afbc186a15
 110315 fail 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
164c34dd23bc3ea8d5285752d9270627a93c91f5
 110316 pass 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
da743dc82adffd36ce2d71776f4ea5afbc186a15
Searching for interesting versions
 Result found: flight 103351 (pass), for basis pass
 Result found: flight 110185 (fail), for basis failure
 Repro found: flight 110297 (pass), for basis pass
 Repro found: flight 110299 (fail), for basis failure
 0 revisions at 2583eaad3c4e6baebdac6800a26de1e10625b6bb 
da743dc82adffd36ce2d71776f4ea5afbc186a15
No revisions left to test, checking graph state.
 Result found: flight 110309 (pass), for last pass
 Result found: flight 110310 (fail), for first failure
 Repro found: flight 110314 (pass), for last pass
 Repro found: flight 110315 (fail), for first failure
 Repro found: flight 110316 (pass), for last pass
 Repro found: flight 110317 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  164c34dd23bc3ea8d5285752d9270627a93c91f5
  Bug not present: da743dc82adffd36ce2d71776f4ea5afbc186a15
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/110317/


  commit 164c34dd23bc3ea8d5285752d9270627a93c91f5
  Author: Jan Beulich <jbeulich@xxxxxxxx>
  Date:   Fri Jun 9 13:51:34 2017 +0200
  
      hvmloader: avoid tests when they would clobber used memory
      
      First of all limit the memory range used for testing to 4Mb: There's no
      point placing page tables right above 8Mb when they can equally well
      live at the bottom of the chunk at 4Mb - rep_io_test() cares about the
      5Mb...7Mb range only anyway. In a subsequent patch this will then also
      allow simply looking for an unused 4Mb range (instead of using a build
      time determined one).
      
      Extend the "skip tests" condition beyond the "is there enough memory"
      question.
      
      Reported-by: Charles Arnold <carnold@xxxxxxxx>
      Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
      Tested-by: Gary Lin <glin@xxxxxxxx>
      Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
      master commit: 0d6968635ce51a8ed7508ddcf17b3d13a462cb27
      master date: 2017-05-19 16:04:38 +0200

pnmtopng: 235 colors found
Revision graph left in 
/home/logs/results/bisect/xen-4.7-testing/build-amd64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
110317: tolerable ALL FAIL

flight 110317 xen-4.7-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/110317/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64                   5 xen-build               fail baseline untested


jobs:
 build-amd64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


_______________________________________________
osstest-output mailing list
osstest-output@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/osstest-output

 

