[xen-unstable bisection] complete build-arm64-libvirt



branch xen-unstable
xenbranch xen-unstable
job build-arm64-libvirt
testid libvirt-build

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  66dd1c62b2a3c707bd5c55750d10a8223fbd577f
  Bug not present: f732240fd3bac25116151db5ddeb7203b62e85ce
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/172124/
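
[To double-check the boundary outside of osstest, the two xen.git
revisions above can in principle be fed to a plain git bisect. The
sketch below is not part of the osstest run; ./try-libvirt-build.sh is
a hypothetical local script that builds Xen and then attempts the
libvirt build that fails here.]

    # Sketch only: bisect xen.git between the known-good and first-bad revisions.
    git clone git://xenbits.xen.org/xen.git && cd xen
    git bisect start
    git bisect bad  66dd1c62b2a3c707bd5c55750d10a8223fbd577f  # bug introduced (per this report)
    git bisect good f732240fd3bac25116151db5ddeb7203b62e85ce  # bug not present (per this report)
    git bisect run ./try-libvirt-build.sh                     # placeholder build-and-test script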


  commit 66dd1c62b2a3c707bd5c55750d10a8223fbd577f
  Author: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
  Date:   Fri Jul 15 22:20:24 2022 +0300
  
      libxl: Add support for Virtio disk configuration
      
      This patch adds basic support for configuring and assisting a
      virtio-mmio based virtio-disk backend (emulator), which is intended
      to run outside of QEMU and can be run in any domain.
      Although, from the toolstack's point of view, the Virtio block
      device is quite different from the traditional Xen PV block device
      (vbd):
       - as the frontend is virtio-blk, which is not a Xenbus driver,
         nothing written to Xenstore is currently fetched by the frontend
         ("vdev" is not passed to the frontend). This might need to be
         revised in the future, so frontend data might be written to
         Xenstore in order to support hotplugging virtio devices or
         passing the backend domain id on architectures where the
         device-tree is not available.
       - the ring-ref/event-channel are not used for the backend<->frontend
         communication; the proposed IPC for Virtio is IOREQ/DM
      it is still a "block device" and ought to be integrated into the
      existing "disk" handling. So, re-use (and adapt) the "disk"
      parsing/configuration logic to deal with Virtio devices as well.
      
      For the immediate purpose, and to be able to extend that support to
      other use-cases in the future (QEMU, virtio-pci, etc.), perform the
      following actions:
      - Add a new disk backend type (LIBXL_DISK_BACKEND_STANDALONE) and
        reflect that in the configuration
      - Introduce new disk "specification" and "transport" fields in struct
        libxl_device_disk. Both are written to Xenstore. The transport
        field is only used for the "virtio" specification and only accepts
        the value "mmio" for now.
      - Introduce a new "specification" option, with the "xen"
        communication protocol being the default value.
      - Add a new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK), as the
        current one (LIBXL__DEVICE_KIND_VBD) doesn't fit the Virtio disk
        model
      
      An example of domain configuration for Virtio disk:
      disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=standalone, specification=virtio' ]
      
      Nothing has changed for default Xen disk configuration.
      
      Please note that this patch is not enough for virtio-disk to work
      on Xen (Arm): for every Virtio device (including disk) we need
      to allocate Virtio MMIO parameters (IRQ and memory region), pass
      them to the backend, and update the guest device-tree. A subsequent
      patch will add these missing bits. For the current patch,
      the default "irq" and "base" are just written to Xenstore.
      This is not an ideal split, but this way we avoid breaking
      bisectability.
      
      Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
      Reviewed-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
      Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>
      Tested-by: Jiamei Xie <jiamei.xie@xxxxxxx>
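
[For context, a minimal sketch of how the disk line from this commit
message might sit in a full xl domain configuration. This is
illustrative only and assumes an arm64 direct-boot guest; the guest
name, kernel path, memory and vcpu values are placeholders, not taken
from this report. Only the disk line reflects the bisected commit.]

    # Hypothetical xl guest config; only the disk line comes from the commit above.
    name   = "guest-virtio-disk"
    kernel = "/path/to/guest/Image"     # placeholder guest kernel
    memory = 1024
    vcpus  = 2
    # Traditional Xen PV disk (behaviour unchanged by the commit):
    # disk = [ 'phy:/dev/mmcblk0p2, xvda, w' ]
    # Virtio disk served by a standalone (out-of-QEMU) backend, as introduced here:
    disk   = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=standalone, specification=virtio' ]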


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/build-arm64-libvirt.libvirt-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/xen-unstable/build-arm64-libvirt.libvirt-build
 --summary-out=tmp/172124.bisection-summary --basis-template=172073 
--blessings=real,real-bisect,real-retry xen-unstable build-arm64-libvirt 
libvirt-build
Searching for failure / basis pass:
 172104 fail [host=rochester0] / 172073 ok.
Failure / basis pass flights: 172104 / 172073
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
4d96a4fe2ac08cc93f2e7eca56120792363cb950
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
f732240fd3bac25116151db5ddeb7203b62e85ce
Generating revisions with ./adhoc-revtuple-generator
 git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad
 https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd
 git://xenbits.xen.org/qemu-xen.git#b746458e1ce1bec85e58b458386f8b7a0bedfaa6-b746458e1ce1bec85e58b458386f8b7a0bedfaa6
 git://xenbits.xen.org/xen.git#f732240fd3bac25116151db5ddeb7203b62e85ce-4d96a4fe2ac08cc93f2e7eca56120792363cb950
Loaded 5001 nodes in revision graph
Searching for test results:
 171887 [host=laxton1]
 171896 [host=laxton1]
 171910 [host=rochester1]
 171933 [host=laxton1]
 171993 [host=laxton1]
 172058 [host=laxton1]
 172073 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
f732240fd3bac25116151db5ddeb7203b62e85ce
 172081 [host=rochester1]
 172089 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
4d96a4fe2ac08cc93f2e7eca56120792363cb950
 172105 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
f732240fd3bac25116151db5ddeb7203b62e85ce
 172106 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
4d96a4fe2ac08cc93f2e7eca56120792363cb950
 172112 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
124f138b37d595294b3100349e26ffb3f1df7b13
 172115 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
108e6f282d2c2b8442ac9e1487e6fd7865cd6ede
 172117 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
2128143c114c52c7536e37c32935fdd77f23edc1
 172118 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
66dd1c62b2a3c707bd5c55750d10a8223fbd577f
 172104 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
4d96a4fe2ac08cc93f2e7eca56120792363cb950
 172119 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
f732240fd3bac25116151db5ddeb7203b62e85ce
 172121 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
66dd1c62b2a3c707bd5c55750d10a8223fbd577f
 172122 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
f732240fd3bac25116151db5ddeb7203b62e85ce
 172124 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
66dd1c62b2a3c707bd5c55750d10a8223fbd577f
Searching for interesting versions
 Result found: flight 172073 (pass), for basis pass
 For basis failure, parent search stopping at 
2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
f732240fd3bac25116151db5ddeb7203b62e85ce, results HASH(0x563f80562a10)
HASH(0x563f80565fa0) HASH(0x563f80564598) HASH(0x563f80578158)
 Result found: flight 172089 (fail), for basis failure (at ancestor ~386)
 Repro found: flight 172105 (pass), for basis pass
 Repro found: flight 172106 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 
27acf0ef828bf719b2053ba398b195829413dbdd 
b746458e1ce1bec85e58b458386f8b7a0bedfaa6 
f732240fd3bac25116151db5ddeb7203b62e85ce
No revisions left to test, checking graph state.
 Result found: flight 172073 (pass), for last pass
 Result found: flight 172118 (fail), for first failure
 Repro found: flight 172119 (pass), for last pass
 Repro found: flight 172121 (fail), for first failure
 Repro found: flight 172122 (pass), for last pass
 Repro found: flight 172124 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  66dd1c62b2a3c707bd5c55750d10a8223fbd577f
  Bug not present: f732240fd3bac25116151db5ddeb7203b62e85ce
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/172124/


  commit 66dd1c62b2a3c707bd5c55750d10a8223fbd577f
  Author: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
  Date:   Fri Jul 15 22:20:24 2022 +0300
  
      libxl: Add support for Virtio disk configuration
      
      This patch adds basic support for configuring and assisting a
      virtio-mmio based virtio-disk backend (emulator), which is intended
      to run outside of QEMU and can be run in any domain.
      Although, from the toolstack's point of view, the Virtio block
      device is quite different from the traditional Xen PV block device
      (vbd):
       - as the frontend is virtio-blk, which is not a Xenbus driver,
         nothing written to Xenstore is currently fetched by the frontend
         ("vdev" is not passed to the frontend). This might need to be
         revised in the future, so frontend data might be written to
         Xenstore in order to support hotplugging virtio devices or
         passing the backend domain id on architectures where the
         device-tree is not available.
       - the ring-ref/event-channel are not used for the backend<->frontend
         communication; the proposed IPC for Virtio is IOREQ/DM
      it is still a "block device" and ought to be integrated into the
      existing "disk" handling. So, re-use (and adapt) the "disk"
      parsing/configuration logic to deal with Virtio devices as well.
      
      For the immediate purpose, and to be able to extend that support to
      other use-cases in the future (QEMU, virtio-pci, etc.), perform the
      following actions:
      - Add a new disk backend type (LIBXL_DISK_BACKEND_STANDALONE) and
        reflect that in the configuration
      - Introduce new disk "specification" and "transport" fields in struct
        libxl_device_disk. Both are written to Xenstore. The transport
        field is only used for the "virtio" specification and only accepts
        the value "mmio" for now.
      - Introduce a new "specification" option, with the "xen"
        communication protocol being the default value.
      - Add a new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK), as the
        current one (LIBXL__DEVICE_KIND_VBD) doesn't fit the Virtio disk
        model
      
      An example of domain configuration for Virtio disk:
      disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=standalone, specification=virtio' ]
      
      Nothing has changed for default Xen disk configuration.
      
      Please note that this patch is not enough for virtio-disk to work
      on Xen (Arm): for every Virtio device (including disk) we need
      to allocate Virtio MMIO parameters (IRQ and memory region), pass
      them to the backend, and update the guest device-tree. A subsequent
      patch will add these missing bits. For the current patch,
      the default "irq" and "base" are just written to Xenstore.
      This is not an ideal split, but this way we avoid breaking
      bisectability.
      
      Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
      Reviewed-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
      Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>
      Tested-by: Jiamei Xie <jiamei.xie@xxxxxxx>

Revision graph left in 
/home/logs/results/bisect/xen-unstable/build-arm64-libvirt.libvirt-build.{dot,ps,png,html,svg}.
----------------------------------------
172124: tolerable ALL FAIL

flight 172124 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/172124/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build           fail baseline untested


jobs:
 build-arm64-libvirt                                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary




 

