
[Xen-devel] [xen-4.5-testing bisection] complete test-amd64-amd64-xl-pvh-intel



branch xen-4.5-testing
xenbranch xen-4.5-testing
job test-amd64-amd64-xl-pvh-intel
testid xen-boot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c421378a8d14c811e5467d535bc71adc0328a316
  Bug not present: b1f4e86aa3bd224bde62f18cf51381e6fe731a2f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/100337/


  commit c421378a8d14c811e5467d535bc71adc0328a316
  Author: George Dunlap <george.dunlap@xxxxxxxxxx>
  Date:   Fri Aug 5 14:07:27 2016 +0200
  
      xen: Have schedulers revise initial placement
      
      The generic domain creation logic in
      xen/common/domctl.c:default_vcpu0_location() attempts to do initial
      placement load-balancing by placing vcpu 0 on the least-busy
      non-primary hyperthread available.  Unfortunately, the logic can end
      up picking a pcpu that's not in the online mask.  When this is
      passed to a scheduler, such as credit2, which assumes that the
      initial assignment is valid, it causes a null pointer dereference
      looking up the runqueue.
      
      Furthermore, this initial placement doesn't take into account hard or
      soft affinity, or any scheduler-specific knowledge (such as historic
      runqueue load, as in credit2).
      
      To solve this, when inserting a vcpu, always call the per-scheduler
      "pick" function to revise the initial placement.  This will
      automatically take all knowledge the scheduler has into account.
      
      csched2_cpu_pick ASSERTs that the vcpu's pcpu scheduler lock has been
      taken.  Grab and release the lock to minimize time spent with irqs
      disabled.
      
      Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
      Reviewed-by: Meng Xu <mengxu@xxxxxxxxxxxxx>
      Reviewed-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
      master commit: 9f358ddd69463fa8fb65cf67beb5f6f0d3350e32
      master date: 2016-07-26 10:42:49 +0100
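
To make the failure mode and the shape of the fix concrete, here is a
minimal standalone C sketch.  This is not Xen code: every name in it
(naive_initial_placement, sched_pick_cpu, the runqueues array, the
hard-coded online mask) is an invented stand-in, and the real interfaces
(default_vcpu0_location(), csched2_cpu_pick(), cpumasks, per-scheduler
pick hooks) are considerably more involved.

    /*
     * Illustrative model only: an initial placement that ignores the
     * online mask can select a pcpu with no runqueue, and revising the
     * choice through a "pick" step keeps the assignment valid.
     */
    #include <assert.h>
    #include <stdio.h>

    #define NR_CPUS 8

    /* One runqueue pointer per pcpu; offline pcpus have none. */
    struct runqueue { int load; };
    static struct runqueue online_rq;
    static struct runqueue *runqueues[NR_CPUS];

    /* Pretend only pcpus 0-3 are online. */
    static int cpu_online(unsigned int cpu) { return cpu < 4; }

    /*
     * Models the buggy default placement: picks a pcpu without
     * consulting the online mask (hard-coded to 6 to force the bug).
     */
    static unsigned int naive_initial_placement(void) { return 6; }

    /*
     * Models the per-scheduler "pick" hook added by the fix: revise
     * the assignment so it always lands on an online pcpu.  A real
     * scheduler would also weigh affinity and runqueue load here.
     */
    static unsigned int sched_pick_cpu(unsigned int cpu)
    {
        if (cpu_online(cpu))
            return cpu;
        for (unsigned int c = 0; c < NR_CPUS; c++)
            if (cpu_online(c))
                return c;
        assert(0);  /* no online pcpu at all */
        return 0;
    }

    int main(void)
    {
        for (unsigned int c = 0; c < NR_CPUS; c++)
            runqueues[c] = cpu_online(c) ? &online_rq : NULL;

        unsigned int cpu = naive_initial_placement();
        /* runqueues[cpu] is NULL here; dereferencing it would crash. */
        printf("initial pcpu %u, runqueue %p\n", cpu, (void *)runqueues[cpu]);

        cpu = sched_pick_cpu(cpu);  /* the fix: revise the placement */
        printf("revised pcpu %u, runqueue %p\n", cpu, (void *)runqueues[cpu]);
        return 0;
    }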


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.5-testing/test-amd64-amd64-xl-pvh-intel.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.
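
As a hedged illustration of what one of those graph nodes is, the
following C fragment models a revision tuple as a plain struct, one
commit ID per tree in the order the Trees are listed above, populated
with the "Basis pass" tuple from the log below.  The struct and field
names are invented for this sketch; osstest itself is written in Perl.

    #include <stdio.h>

    /* One commit ID per tree, in the order the Trees are listed above. */
    struct revtuple {
        const char *linux;          /* linux-pvops.git */
        const char *linuxfirmware;  /* linux-firmware.git */
        const char *qemu;           /* qemu-xen-traditional.git */
        const char *qemuu;          /* qemu-xen.git */
        const char *xen;            /* xen.git */
    };

    /* The "Basis pass" tuple reported by cs-bisection-step below. */
    static const struct revtuple basis_pass = {
        "44dd5e6b1cf505485d839bd62d47e29a36232d67",
        "c530a75c1e6a472b0eb9558310b518f0dfcd8860",
        "28c21388c2a32259cff37fc578684f994dca8c9f",
        "5e40cec825a2582d8a91119c485f5130cc2648e9",
        "eadd6636fae2abe1608207569e32c8457e37c653",
    };

    int main(void)
    {
        printf("basis-pass xen revision: %s\n", basis_pass.xen);
        return 0;
    }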

----------------------------------------
Running cs-bisection-step
 --graph-out=/home/logs/results/bisect/xen-4.5-testing/test-amd64-amd64-xl-pvh-intel.xen-boot
 --summary-out=tmp/100337.bisection-summary --basis-template=99752
 --blessings=real,real-bisect
 xen-4.5-testing test-amd64-amd64-xl-pvh-intel xen-boot
Searching for failure / basis pass:
 99963 fail [host=godello1] / 99752 [host=godello0] 96516 ok.
Failure / basis pass flights: 99963 / 96516
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest da99423b3cd3e48c42c0d64b79aba58d828f9648 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
835c204f1196ab8f5213a9dc5299ed76e748cdca 
c18c1456c48f23d9b31e7a32a21aa1ae9c53df93
Basis pass 44dd5e6b1cf505485d839bd62d47e29a36232d67 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
5e40cec825a2582d8a91119c485f5130cc2648e9 
eadd6636fae2abe1608207569e32c8457e37c653
Generating revisions with ./adhoc-revtuple-generator
 git://xenbits.xen.org/linux-pvops.git#44dd5e6b1cf505485d839bd62d47e29a36232d67-da99423b3cd3e48c42c0d64b79aba58d828f9648
 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 git://xenbits.xen.org/qemu-xen-traditional.git#28c21388c2a32259cff37fc578684f994dca8c9f-28c21388c2a32259cff37fc578684f994dca8c9f
 git://xenbits.xen.org/qemu-xen.git#5e40cec825a2582d8a91119c485f5130cc2648e9-835c204f1196ab8f5213a9dc5299ed76e748cdca
 git://xenbits.xen.org/xen.git#eadd6636fae2abe1608207569e32c8457e37c653-c18c1456c48f23d9b31e7a32a21aa1ae9c53df93
From git://cache:9419/git://xenbits.xen.org/xen
   7fb0a87..7279829  staging    -> origin/staging
Loaded 3006 nodes in revision graph
Searching for test results:
 96511 pass 44dd5e6b1cf505485d839bd62d47e29a36232d67 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
5e40cec825a2582d8a91119c485f5130cc2648e9 
eadd6636fae2abe1608207569e32c8457e37c653
 96516 pass 44dd5e6b1cf505485d839bd62d47e29a36232d67 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
5e40cec825a2582d8a91119c485f5130cc2648e9 
eadd6636fae2abe1608207569e32c8457e37c653
 99752 [host=godello0]
 99963 fail da99423b3cd3e48c42c0d64b79aba58d828f9648 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
835c204f1196ab8f5213a9dc5299ed76e748cdca 
c18c1456c48f23d9b31e7a32a21aa1ae9c53df93
 100320 fail da99423b3cd3e48c42c0d64b79aba58d828f9648 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
835c204f1196ab8f5213a9dc5299ed76e748cdca 
c18c1456c48f23d9b31e7a32a21aa1ae9c53df93
 100316 pass 44dd5e6b1cf505485d839bd62d47e29a36232d67 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
5e40cec825a2582d8a91119c485f5130cc2648e9 
eadd6636fae2abe1608207569e32c8457e37c653
 100321 pass c0b9ae9b175c305bcff59d9505a7da8d204cc044 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
5e40cec825a2582d8a91119c485f5130cc2648e9 
c4c0312efaf8bd252ff06d55d6bf5b542a0a9421
 100323 pass e55e3853f21406d190b4eb54a928345446660aa0 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
5e40cec825a2582d8a91119c485f5130cc2648e9 
c4c0312efaf8bd252ff06d55d6bf5b542a0a9421
 100324 pass d5ec9cb62fa916687ef726f16c604f94350ff71d 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
5e40cec825a2582d8a91119c485f5130cc2648e9 
c4c0312efaf8bd252ff06d55d6bf5b542a0a9421
 100325 pass da99423b3cd3e48c42c0d64b79aba58d828f9648 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
835c204f1196ab8f5213a9dc5299ed76e748cdca 
cfcdeea1e6fc4ea3428693198878920c362bf923
 100327 fail da99423b3cd3e48c42c0d64b79aba58d828f9648 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
835c204f1196ab8f5213a9dc5299ed76e748cdca 
c421378a8d14c811e5467d535bc71adc0328a316
 100330 pass da99423b3cd3e48c42c0d64b79aba58d828f9648 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
835c204f1196ab8f5213a9dc5299ed76e748cdca 
b1f4e86aa3bd224bde62f18cf51381e6fe731a2f
 100332 fail da99423b3cd3e48c42c0d64b79aba58d828f9648 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
835c204f1196ab8f5213a9dc5299ed76e748cdca 
c421378a8d14c811e5467d535bc71adc0328a316
 100333 pass da99423b3cd3e48c42c0d64b79aba58d828f9648 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
835c204f1196ab8f5213a9dc5299ed76e748cdca 
b1f4e86aa3bd224bde62f18cf51381e6fe731a2f
 100335 fail da99423b3cd3e48c42c0d64b79aba58d828f9648 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
835c204f1196ab8f5213a9dc5299ed76e748cdca 
c421378a8d14c811e5467d535bc71adc0328a316
 100336 pass da99423b3cd3e48c42c0d64b79aba58d828f9648 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
835c204f1196ab8f5213a9dc5299ed76e748cdca 
b1f4e86aa3bd224bde62f18cf51381e6fe731a2f
 100337 fail da99423b3cd3e48c42c0d64b79aba58d828f9648 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
835c204f1196ab8f5213a9dc5299ed76e748cdca 
c421378a8d14c811e5467d535bc71adc0328a316
Searching for interesting versions
 Result found: flight 96511 (pass), for basis pass
 Result found: flight 99963 (fail), for basis failure
 Repro found: flight 100316 (pass), for basis pass
 Repro found: flight 100320 (fail), for basis failure
 0 revisions at da99423b3cd3e48c42c0d64b79aba58d828f9648 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
28c21388c2a32259cff37fc578684f994dca8c9f 
835c204f1196ab8f5213a9dc5299ed76e748cdca 
b1f4e86aa3bd224bde62f18cf51381e6fe731a2f
No revisions left to test, checking graph state.
 Result found: flight 100330 (pass), for last pass
 Result found: flight 100332 (fail), for first failure
 Repro found: flight 100333 (pass), for last pass
 Repro found: flight 100335 (fail), for first failure
 Repro found: flight 100336 (pass), for last pass
 Repro found: flight 100337 (fail), for first failure
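
The alternating pass/fail flights just above (100330/100332, then
100333/100335, then 100336/100337) are repeated reproductions of the
two adjacent xen.git revisions, confirming that the result is stable
rather than host flakiness.  The following C loop is a hypothetical
sketch of that confirmation step only; the real logic lives in
osstest's cs-bisection-step (Perl), and run_flight here is a fake
stand-in oracle, not a real harness call.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Fake oracle standing in for "submit a flight, read its result":
     * the revision blamed by this report fails, everything else passes. */
    static bool run_flight(const char *rev)
    {
        return strncmp(rev, "c421378a", 8) != 0;  /* true = pass */
    }

    /* Require `repros` consecutive runs matching the expected result. */
    static bool confirmed(const char *rev, bool expect_pass, int repros)
    {
        for (int i = 0; i < repros; i++)
            if (run_flight(rev) != expect_pass)
                return false;
        return true;
    }

    int main(void)
    {
        const char *last_pass  = "b1f4e86aa3bd224bde62f18cf51381e6fe731a2f";
        const char *first_fail = "c421378a8d14c811e5467d535bc71adc0328a316";

        if (confirmed(last_pass, true, 2) && confirmed(first_fail, false, 2))
            printf("*** Found and reproduced problem changeset ***\n");
        return 0;
    }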

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c421378a8d14c811e5467d535bc71adc0328a316
  Bug not present: b1f4e86aa3bd224bde62f18cf51381e6fe731a2f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/100337/


  commit c421378a8d14c811e5467d535bc71adc0328a316
  Author: George Dunlap <george.dunlap@xxxxxxxxxx>
  Date:   Fri Aug 5 14:07:27 2016 +0200
  
      xen: Have schedulers revise initial placement
      
      The generic domain creation logic in
      xen/common/domctl.c:default_vcpu0_location() attempts to do initial
      placement load-balancing by placing vcpu 0 on the least-busy
      non-primary hyperthread available.  Unfortunately, the logic can end
      up picking a pcpu that's not in the online mask.  When this is
      passed to a scheduler, such as credit2, which assumes that the
      initial assignment is valid, it causes a null pointer dereference
      looking up the runqueue.
      
      Furthermore, this initial placement doesn't take into account hard or
      soft affinity, or any scheduler-specific knowledge (such as historic
      runqueue load, as in credit2).
      
      To solve this, when inserting a vcpu, always call the per-scheduler
      "pick" function to revise the initial placement.  This will
      automatically take all knowledge the scheduler has into account.
      
      csched2_cpu_pick ASSERTs that the vcpu's pcpu scheduler lock has been
      taken.  Grab and release the lock to minimize time spent with irqs
      disabled.
      
      Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
      Reviewed-by: Meng Xu <mengxu@xxxxxxxxxxxxx>
      Reviewed-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
      master commit: 9f358ddd69463fa8fb65cf67beb5f6f0d3350e32
      master date: 2016-07-26 10:42:49 +0100

Revision graph left in 
/home/logs/results/bisect/xen-4.5-testing/test-amd64-amd64-xl-pvh-intel.xen-boot.{dot,ps,png,html,svg}.
----------------------------------------
100337: tolerable ALL FAIL

flight 100337 xen-4.5-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/100337/

Failures :-/ but no regressions.

Tests which did not succeed, including tests which could not be run:
 test-amd64-amd64-xl-pvh-intel  6 xen-boot               fail baseline untested


jobs:
 test-amd64-amd64-xl-pvh-intel                                fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

