
[Xen-devel] [qemu-mainline baseline-only test] 44359: tolerable FAIL



This run is configured for baseline tests only.

flight 44359 qemu-mainline real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/44359/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-qemuu-nested-intel 13 xen-boot/l1             fail like 44356
 test-amd64-amd64-qemuu-nested-intel 14 capture-logs/l1(14)     fail like 44356
 test-amd64-amd64-i386-pvgrub 10 guest-start                  fail   like 44356

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvh-intel 11 guest-start                  fail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start                  fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore            fail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-xsm      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-xsm      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-midway   13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-midway   12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop             fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                53343338a6e7b83777b82803398572b40afc8c0f
baseline version:
 qemuu                8d0d9b9f67d6bdee9eaec1e8c1222ad91dc4ac01

Last test of basis    44356  2016-04-22 19:18:45 Z    1 days
Testing same since    44359  2016-04-23 15:00:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christoffer Dall <christoffer.dall@xxxxxxxxxx>
  Eric Blake <eblake@xxxxxxxxxx>
  Fam Zheng <famz@xxxxxxxxxx>
  Kevin Wolf <kwolf@xxxxxxxxxx>
  Peter Maydell <peter.maydell@xxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass    
 build-armhf-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-armhf-armhf-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-armhf-armhf-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvh-amd                                  fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvh-intel                                fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-armhf-armhf-xl-midway                                   pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass    
 test-amd64-i386-xl-qemuu-winxpsp3                            pass    


------------------------------------------------------------
sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
    http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.

------------------------------------------------------------
commit 53343338a6e7b83777b82803398572b40afc8c0f
Merge: ee1e0f8 ab27c3b
Author: Peter Maydell <peter.maydell@xxxxxxxxxx>
Date:   Fri Apr 22 16:17:12 2016 +0100

    Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
    
    Mirror block job fixes for 2.6.0-rc4
    
    # gpg: Signature made Fri 22 Apr 2016 15:46:41 BST using RSA key ID C88F2FD6
    # gpg: Good signature from "Kevin Wolf <kwolf@xxxxxxxxxx>"
    
    * remotes/kevin/tags/for-upstream:
      mirror: Workaround for unexpected iohandler events during completion
      aio-posix: Skip external nodes in aio_dispatch
      virtio: Mark host notifiers as external
      event-notifier: Add "is_external" parameter
      iohandler: Introduce iohandler_get_aio_context
    
    Signed-off-by: Peter Maydell <peter.maydell@xxxxxxxxxx>

commit ab27c3b5e7408693dde0b565f050aa55c4a1bcef
Author: Fam Zheng <famz@xxxxxxxxxx>
Date:   Fri Apr 22 21:53:56 2016 +0800

    mirror: Workaround for unexpected iohandler events during completion
    
    Commit 5a7e7a0ba moved mirror_exit to a BH handler but didn't add any
    protection against new requests that could sneak in just before the
    BH is dispatched. For example (assuming a code base at that commit):
    
            main_loop_wait # 1
              os_host_main_loop_wait
                g_main_context_dispatch
                  aio_ctx_dispatch
                    aio_dispatch
                      ...
                        mirror_run
                          bdrv_drain
        (a)               block_job_defer_to_main_loop
              qemu_iohandler_poll
                virtio_queue_host_notifier_read
                  ...
                    virtio_submit_multiwrite
        (b)           blk_aio_multiwrite
    
            main_loop_wait # 2
              <snip>
                    aio_dispatch
                      aio_bh_poll
        (c)             mirror_exit
    
    At (a) we know the BDS has no pending request. However, the same
    main_loop_wait call is going to dispatch iohandlers (EventNotifier
    events), which may lead to new I/O from the guest. So the invariant is
    already broken at (c). Data loss.
    
    Commit f3926945c8 made iohandler use the aio API.  The order of
    virtio_queue_host_notifier_read and block_job_defer_to_main_loop within
    a main_loop_wait becomes unpredictable, and even worse, if the host
    notifier event arrives at the next main_loop_wait call, the
    unpredictable order between mirror_exit and
    virtio_queue_host_notifier_read is also a problem. As shown below, this
    commit made the bug easier to trigger:
    
        - Bug case 1:
    
            main_loop_wait # 1
              os_host_main_loop_wait
                g_main_context_dispatch
                  aio_ctx_dispatch (qemu_aio_context)
                    ...
                      mirror_run
                        bdrv_drain
        (a)             block_job_defer_to_main_loop
                  aio_ctx_dispatch (iohandler_ctx)
                    virtio_queue_host_notifier_read
                      ...
                        virtio_submit_multiwrite
        (b)               blk_aio_multiwrite
    
            main_loop_wait # 2
              ...
                    aio_dispatch
                      aio_bh_poll
        (c)             mirror_exit
    
        - Bug case 2:
    
            main_loop_wait # 1
              os_host_main_loop_wait
                g_main_context_dispatch
                  aio_ctx_dispatch (qemu_aio_context)
                    ...
                      mirror_run
                        bdrv_drain
        (a)             block_job_defer_to_main_loop
    
            main_loop_wait # 2
              ...
                aio_ctx_dispatch (iohandler_ctx)
                  virtio_queue_host_notifier_read
                    ...
                      virtio_submit_multiwrite
        (b)             blk_aio_multiwrite
                  aio_dispatch
                    aio_bh_poll
        (c)           mirror_exit
    
    In both cases, (b) breaks the invariant wanted by (a) and (c).
    
    Until then, the request loss had been silent. Later, 3f09bfbc7be added
    asserts at (c) to check the invariant (in
    bdrv_replace_in_backing_chain), and Max reported an assertion failure
    first visible there, triggered by doing an active commit while the
    guest was running bonnie++.

    2.5 added bdrv_drained_begin at (a) to protect the dataplane case from
    similar problems, but we never realized the main loop bug until now.

    As a bandage, this patch temporarily disables the iohandler's external
    events together with those of bs->ctx.
    
    Launchpad Bug: 1570134
    
    Cc: qemu-stable@xxxxxxxxxx
    Signed-off-by: Fam Zheng <famz@xxxxxxxxxx>
    Reviewed-by: Jeff Cody <jcody@xxxxxxxxxx>
    Signed-off-by: Kevin Wolf <kwolf@xxxxxxxxxx>
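
For reference, here is a minimal sketch of the approach the commit message
describes. It assumes QEMU's existing aio_disable_external()/aio_enable_external()
helpers plus the iohandler_get_aio_context() accessor introduced later in this
pull; the placement inside mirror_run()/mirror_exit() and the local variable
names (s, data, bs, src) are assumptions for illustration, not the verbatim
patch.

    /* Sketch only (assumed shape, not the actual diff): pair the existing
     * drained section on bs->ctx with disabling external events on the
     * main-loop iohandler context, so a virtio host notifier cannot inject
     * new guest I/O between (a) and (c) in the traces above. */

    /* in mirror_run(), just before deferring completion to the main loop */
    bdrv_drained_begin(bs);                             /* quiesce bs->ctx */
    aio_disable_external(iohandler_get_aio_context());  /* and iohandlers  */
    block_job_defer_to_main_loop(&s->common, mirror_exit, data);

    /* in mirror_exit(), once the graph manipulation is finished */
    aio_enable_external(iohandler_get_aio_context());
    bdrv_drained_end(src);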

commit 37989ced44e559dbb1edb8b238ffe221f70214b4
Author: Fam Zheng <famz@xxxxxxxxxx>
Date:   Fri Apr 22 21:53:55 2016 +0800

    aio-posix: Skip external nodes in aio_dispatch
    
    aio_poll doesn't poll the external nodes, so this should never be true
    there, but aio_ctx_dispatch may get notified of events from the
    GSource. To make bdrv_drained_begin effective in the main loop, we
    should check the is_external flag here too.

    Also do the check in aio_pending so that aio_dispatch is not called
    superfluously when there are no events other than external ones.
    
    Signed-off-by: Fam Zheng <famz@xxxxxxxxxx>
    Reviewed-by: Jeff Cody <jcody@xxxxxxxxxx>
    Signed-off-by: Kevin Wolf <kwolf@xxxxxxxxxx>
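
As an illustration of the check being added, below is a small standalone C
model (simplified names, not QEMU code): internal handlers are always
dispatched, while handlers flagged is_external are skipped for as long as a
drained section holds them disabled. The node_check() helper mirrors the shape
of QEMU's aio_node_check(); the Ctx/Node structs and main() are invented for
the demo.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int external_disable_cnt;    /* bumped by aio_disable_external() */
    } Ctx;

    typedef struct {
        const char *name;
        bool is_external;            /* e.g. a virtio host notifier */
        bool ready;                  /* fd reported readable */
    } Node;

    /* Internal nodes always pass; external ones only while nothing has
     * disabled them (same idea as QEMU's aio_node_check()). */
    static bool node_check(const Ctx *ctx, bool is_external)
    {
        return !is_external || ctx->external_disable_cnt == 0;
    }

    static void dispatch(const Ctx *ctx, Node *nodes, int n)
    {
        for (int i = 0; i < n; i++) {
            if (!nodes[i].ready) {
                continue;
            }
            if (node_check(ctx, nodes[i].is_external)) {
                printf("dispatching %s\n", nodes[i].name);
            } else {
                printf("skipping %s (external, drained)\n", nodes[i].name);
            }
        }
    }

    int main(void)
    {
        Ctx ctx = { .external_disable_cnt = 1 };  /* inside a drained section */
        Node nodes[] = {
            { "block job BH",         false, true },
            { "virtio host notifier", true,  true },
        };
        dispatch(&ctx, nodes, 2);
        return 0;
    }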

commit 14560d69e7c979d97975c3aa6e7bd1ab3249fe88
Author: Fam Zheng <famz@xxxxxxxxxx>
Date:   Fri Apr 22 21:53:54 2016 +0800

    virtio: Mark host notifiers as external
    
    The effect of this change is that the block layer's drained section can
    work, for example when a mirror job is being completed.
    
    Signed-off-by: Fam Zheng <famz@xxxxxxxxxx>
    Reviewed-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
    Signed-off-by: Kevin Wolf <kwolf@xxxxxxxxxx>

commit 54e18d35e44c48cf6e13c4ce09962c30b595b72a
Author: Fam Zheng <famz@xxxxxxxxxx>
Date:   Fri Apr 22 21:53:53 2016 +0800

    event-notifier: Add "is_external" parameter
    
    All callers pass "false", keeping the old semantics. The Windows
    implementation doesn't distinguish the flag yet. On POSIX, it is passed
    down to the underlying aio context.
    
    Signed-off-by: Fam Zheng <famz@xxxxxxxxxx>
    Reviewed-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
    Signed-off-by: Kevin Wolf <kwolf@xxxxxxxxxx>
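
A sketch of what the new parameter looks like to callers, inferred from the
commit message rather than copied from the diff: the prototype shown is an
assumption, my_read_handler is hypothetical, and the vq->host_notifier call is
how the "virtio: Mark host notifiers as external" commit above would plausibly
use it (virtio_queue_host_notifier_read is the handler named in the traces
earlier in this log).

    /* Assumed shape of the changed prototype: callers now declare whether
     * the handler is an "external" event source, so drained sections can
     * suppress it. */
    int event_notifier_set_handler(EventNotifier *e,
                                   bool is_external,
                                   EventNotifierHandler *handler);

    /* Existing callers keep the old behaviour by passing false ... */
    event_notifier_set_handler(&e, false, my_read_handler);

    /* ... while virtio marks its host notifiers external, so guest-
     * triggered I/O is held back while the block layer is drained. */
    event_notifier_set_handler(&vq->host_notifier, true,
                               virtio_queue_host_notifier_read);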

commit bcd82a968fbf7d8156eefbae3f3aab59ad576fa2
Author: Fam Zheng <famz@xxxxxxxxxx>
Date:   Fri Apr 22 21:53:52 2016 +0800

    iohandler: Introduce iohandler_get_aio_context
    
    Signed-off-by: Fam Zheng <famz@xxxxxxxxxx>
    Reviewed-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
    Signed-off-by: Kevin Wolf <kwolf@xxxxxxxxxx>

commit ee1e0f8e5d3682c561edcdceccff72b9d9b16d8b
Author: Christoffer Dall <christoffer.dall@xxxxxxxxxx>
Date:   Fri Apr 22 13:12:09 2016 +0200

    util: align memory allocations to 2M on AArch64
    
    For KVM to use Transparent Huge Pages (THP), we have to ensure that the
    userspace address of the KVM memory slot and the IPA that the guest
    sees for a memory region have the same offset from a 2M huge page
    boundary.
    
    One way to achieve this is to always align the IPA region at a 2M
    boundary and ensure that the mmap alignment is also at 2M.
    
    Unfortunately, we were only doing this for __arm__, not for __aarch64__,
    so add this simple condition.
    
    This fixes a performance regression using KVM/ARM on AArch64 platforms
    that showed a performance penalty of more than 50%, introduced by the
    following commit:
    
    9fac18f (oslib: allocate PROT_NONE pages on top of RAM, 2015-09-10)
    
    We were merely lucky before the above commit: we were allocating large
    regions and naturally getting 2M alignment on those allocations.
    
    Cc: qemu-stable@xxxxxxxxxx
    Reported-by: Shih-Wei Li <shihwei@xxxxxxxxxxxxxxx>
    Signed-off-by: Christoffer Dall <christoffer.dall@xxxxxxxxxx>
    Reviewed-by: Peter Maydell <peter.maydell@xxxxxxxxxx>
    [PMM: wrapped long line]
    Signed-off-by: Peter Maydell <peter.maydell@xxxxxxxxxx>
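
The standalone program below is not the QEMU patch (which simply extends the
2M allocation alignment to __aarch64__); it just illustrates the
over-allocate-and-trim technique by which a 2 MiB-aligned anonymous mapping is
obtained, which is what lets the userspace address and the guest IPA share the
same offset from a 2M boundary. All names are invented for the demo.

    /* Standalone sketch: get a 2 MiB-aligned anonymous mapping by
     * over-allocating and then unmapping the unaligned head and tail. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/mman.h>

    #define ALIGN_2M (2UL * 1024 * 1024)

    static void *anon_alloc_aligned(size_t size, size_t align)
    {
        size_t total = size + align;        /* room to find an aligned start */
        uint8_t *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED) {
            return NULL;
        }
        uintptr_t aligned = ((uintptr_t)base + align - 1) & ~(uintptr_t)(align - 1);
        size_t head = aligned - (uintptr_t)base;
        size_t tail = total - head - size;

        /* Give back the unused head and tail so only the aligned region stays. */
        if (head) {
            munmap(base, head);
        }
        if (tail) {
            munmap((void *)(aligned + size), tail);
        }
        return (void *)aligned;
    }

    int main(void)
    {
        size_t size = 64UL * 1024 * 1024;
        void *p = anon_alloc_aligned(size, ALIGN_2M);
        if (!p) {
            perror("mmap");
            return 1;
        }
        printf("got %p, 2M-aligned: %s\n", p,
               ((uintptr_t)p % ALIGN_2M) == 0 ? "yes" : "no");
        munmap(p, size);
        return 0;
    }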

commit df7b97ff89319ccf392a16748081482a3d22b35a
Author: Eric Blake <eblake@xxxxxxxxxx>
Date:   Thu Apr 21 08:42:30 2016 -0600

    nbd: Don't mishandle unaligned client requests
    
    The NBD protocol does not (yet) force any alignment constraints
    on clients.  Even though qemu NBD clients always send requests
    that are aligned to 512 bytes, we must be prepared for non-qemu
    clients that don't care about alignment (even if it means they
    are less efficient).  Our use of blk_read() and blk_write() was
    silently operating on the wrong file offsets when the client
    made an unaligned request, corrupting the client's data (but
    as the client already has control over the file we are serving,
    I don't think it is a security hole, per se, just a data
    corruption bug).
    
    Note that in the case of NBD_CMD_READ, an unaligned length could
    cause us to return up to 511 bytes of uninitialized trailing
    garbage from blk_try_blockalign() - hopefully nothing sensitive
    from the heap's prior usage is ever leaked in that manner.
    
    Signed-off-by: Eric Blake <eblake@xxxxxxxxxx>
    Reviewed-by: Kevin Wolf <kwolf@xxxxxxxxxx>
    Reviewed-by: Fam Zheng <famz@xxxxxxxxxx>
    Tested-by: Kevin Wolf <kwolf@xxxxxxxxxx>
    Message-id: 1461249750-31928-1-git-send-email-eblake@xxxxxxxxxx
    Signed-off-by: Peter Maydell <peter.maydell@xxxxxxxxxx>
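
As a small illustration of the failure mode described above (this is not the
QEMU fix itself, only a demo of the arithmetic), the standalone snippet below
shows how converting an unaligned byte offset into a 512-byte sector number
silently drops the sub-sector part, so the I/O lands at the wrong file offset.

    #include <stdio.h>
    #include <stdint.h>

    #define SECTOR_SIZE 512ULL

    int main(void)
    {
        uint64_t offsets[] = { 0, 512, 513, 4095, 4097 };
        for (size_t i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
            uint64_t off = offsets[i];
            uint64_t sector = off / SECTOR_SIZE;     /* sector-based API input */
            uint64_t actual = sector * SECTOR_SIZE;  /* where the I/O really lands */
            printf("client offset %5llu -> sector %4llu -> byte %5llu%s\n",
                   (unsigned long long)off, (unsigned long long)sector,
                   (unsigned long long)actual,
                   actual == off ? "" : "  <-- wrong offset");
        }
        return 0;
    }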

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

