
[Xen-devel] [xen-4.8-testing baseline-only test] 75631: regressions - FAIL

This run is configured for baseline tests only.

flight 75631 xen-4.8-testing real [real]

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install   fail REGR. vs. 75593
 test-amd64-i386-xl-raw       10 debian-di-install         fail REGR. vs. 75593

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install        fail like 75593
 test-armhf-armhf-xl-midway   12 guest-start                  fail   like 75593
 test-armhf-armhf-xl          12 guest-start                  fail   like 75593
 test-armhf-armhf-xl-multivcpu 12 guest-start                  fail  like 75593
 test-armhf-armhf-xl-credit2  12 guest-start                  fail   like 75593
 test-armhf-armhf-libvirt     12 guest-start                  fail   like 75593
 test-armhf-armhf-xl-rtds     12 guest-start                  fail   like 75593
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1             fail like 75593
 test-amd64-amd64-i386-pvgrub 10 debian-di-install            fail   like 75593
 test-armhf-armhf-xl-vhd      10 debian-di-install            fail   like 75593
 test-armhf-armhf-libvirt-raw 10 debian-di-install            fail   like 75593
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail like 75593
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail like 75593
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail like 75593
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install         fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install        fail never pass
 test-armhf-armhf-xl-credit1  12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win10-i386 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  090d47c927e91bb882952b4c141e3498cdf6e2a8
baseline version:
 xen                  d6798ce35707a485d9c132319d70dd654620e5e5

Last test of basis    75593  2018-11-14 13:49:42 Z   18 days
Testing same since    75631  2018-12-03 01:31:33 Z    0 days    1 attempts

People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Roger Pau Monné <roger.pau@xxxxxxxxxx>

 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumprun                                          pass    
 build-i386-rumprun                                           pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumprun-amd64                               pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumprun-i386                                 pass    
 test-amd64-amd64-xl-qemut-win10-i386                         fail    
 test-amd64-i386-xl-qemut-win10-i386                          fail    
 test-amd64-amd64-xl-qemuu-win10-i386                         fail    
 test-amd64-i386-xl-qemuu-win10-i386                          fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-armhf-armhf-xl-midway                                   fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    

sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at

Test harness code can be found at

Push not applicable.

commit 090d47c927e91bb882952b4c141e3498cdf6e2a8
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Fri Nov 23 11:52:54 2018 +0100

    VMX: allow migration of guests with SSBD enabled

    The backport of cd53023df9 ("x86/msr: Virtualise MSR_SPEC_CTRL.SSBD for
    guests to use") did not mirror the PV side change into the HVM (VMX-
    specific) code path.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

commit 70294dbe2ad3e50a110b20defe995994976c99c4
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Tue Nov 20 15:59:55 2018 +0100

    x86/dom0: Fix shadowing of PV guests with 2M superpages

    This is a minimal backport of pieces of:
     c/s 28d9a9a2d41759b9e5163037b759ac557aea767c
     c/s 4c5d78a10dc89427140a50a1df5a0b8e9f073e82
    to fix a PV shadowing problem which I hadn't anticipated at the time these
    fixes were first accepted.

    Having opt_allow_superpage disabled causes guest_supports_superpages() to
    return false for PV guests.  Returning false causes guest_walk_tables() to
    ignore L2 superpages, and read under them.

    This ignoring behaviour is correct for 2-level paging when CR4.PSE is clear,
    but isn't correct for 3- or 4-level paging.

    When opt_allow_superpage is clear, PV domU's can't have superpages, but dom0
    will still have its initial P2M constructed with 2M superpages.

    The end result is that, if dom0 becomes shadowed (e.g. PV-L1TF), the next
    memory access touching a P2M superpage will cause the shadow code to read
    under the P2M superpage and attempt to shadow junk.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

commit 88d77da6769b800ad98494f5e919a831dca8538c
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Tue Nov 20 15:59:17 2018 +0100

    x86/dom0: Avoid using 1G superpages if shadowing may be necessary

    The shadow code doesn't support 1G superpages, and will hand #PF[RSVD] back
    to guests.

    For dom0's with 512GB of RAM or more (and subject to the P2M alignment), the
    domain builder might use 1G superpages.

    Avoid using 1G superpages (falling back to 2M superpages instead) if there is
    a reasonable chance that we may have to shadow dom0.  This assumes that there
    are no circumstances where we will activate logdirty mode on dom0.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 96f6ee15ad7ca96472779fc5c083b4149495c584
    master date: 2018-11-12 11:26:04 +0000

commit 92f31182e0f7912885a4b9a4452c2a1dac91705e
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Nov 20 15:58:38 2018 +0100

    x86/shadow: shrink struct page_info's shadow_flags to 16 bits

    This is to avoid it overlapping the linear_pt_count field needed for PV
    domains. Introduce a separate, HVM-only pagetable_dying field to replace
    the sole one left in the upper 16 bits.

    Note that the accesses to ->shadow_flags in shadow_{pro,de}mote() get
    switched to non-atomic, non-bitops operations, as {test,set,clear}_bit()
    are not allowed on uint16_t fields and hence their use would have
    required ugly casts. This is fine because all updates of the field ought
    to occur with the paging lock held, and other updates of it use |= and
    &= as well (i.e. using atomic operations here didn't really guard
    against potentially racing updates elsewhere).

    This is part of XSA-280.

    Reported-by: Prgmr.com Security <security@xxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    master commit: 789589968ed90e82a832dbc60e958c76b787be7e
    master date: 2018-11-20 14:59:54 +0100

commit 4be61c4d9b32603ac21154abdfebfc44abf42fd7
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Nov 20 15:57:50 2018 +0100

    x86/shadow: move OOS flag bit positions

    In preparation of reducing struct page_info's shadow_flags field to 16
    bits, lower the bit positions used for SHF_out_of_sync and

    Instead of also adjusting the open coded use in _get_page_type(),
    introduce shadow_prepare_page_type_change() to contain knowledge of the
    bit positions to shadow code.

    This is part of XSA-280.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    master commit: d68e1070c3e8f4af7a31040f08bdd98e6d6eac1d
    master date: 2018-11-20 14:59:13 +0100

commit 538c7c754a53cb0b57a955cf5c1e09c318664f72
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Tue Nov 20 15:57:06 2018 +0100

    x86/mm: Don't perform flush after failing to update a guest's L1e

    If the L1e update hasn't occurred, the flush cannot do anything useful.  This
    skips the potentially expensive vcpumask_to_pcpumask() conversion, and
    broadcast TLB shootdown.

    More importantly however, we might be in the error path due to a bad va
    parameter from the guest, and this should not propagate into the TLB flushing
    logic.  The INVPCID instruction for example raises #GP for a non-canonical
    address.

    This is XSA-279.

    Reported-by: Matthew Daley <mattd@xxxxxxxxxxx>
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 6c8d50288722672ecc8e19b0741a31b521d01706
    master date: 2018-11-20 14:58:41 +0100

commit 14854d08a81e730f0fc13d756bc080db9dae6ae7
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Nov 20 15:56:29 2018 +0100

    AMD/IOMMU: suppress PTE merging after initial table creation

    The logic is not fit for this purpose, so simply disable its use until
    it can be fixed / replaced. Note that this re-enables merging for the
    table creation case, which was disabled as a (perhaps unintended) side
    effect of the earlier "amd/iommu: fix flush checks". It relies on no
    page getting mapped more than once (with different properties) in this
    process, as that would still be beyond what the merging logic can cope
    with. But arch_iommu_populate_page_table() guarantees this afaict.

    This is part of XSA-275.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 937ef32565fa3a81fdb37b9dd5aa99a1b87afa75
    master date: 2018-11-20 14:55:14 +0100

commit f030ad07534fa88f9f4bff48603bc5a83604f9e4
Author: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Date:   Tue Nov 20 15:55:51 2018 +0100

    amd/iommu: fix flush checks

    Flush checking for AMD IOMMU didn't check whether the previous entry
    was present, or whether the flags (writable/readable) changed in order
    to decide whether a flush should be executed.

    Fix this by taking the writable/readable/next-level fields into account,
    together with the present bit.

    Along these lines the flushing in amd_iommu_map_page() must not be
    omitted for PV domains. The comment there was simply wrong: Mappings may
    very well change, both their addresses and their permissions. Ultimately
    this should honor iommu_dont_flush_iotlb, but to achieve this
    amd_iommu_ops first needs to gain an .iotlb_flush hook.

    Also make clear_iommu_pte_present() static, to demonstrate there's no
    caller omitting the (subsequent) flush.

    This is part of XSA-275.

    Reported-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 1a7ffe466cd057daaef245b0a1ab6b82588e4c01
    master date: 2018-11-20 14:52:12 +0100
(qemu changes not included)
