
[xen-4.9-testing test] 130890: regressions - FAIL



flight 130890 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/130890/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop        fail REGR. vs. 130212
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail REGR. vs. 130212

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore.2 fail in 130851 pass in 130890
 test-amd64-amd64-xl-qemut-ws16-amd64 13 guest-saverestore  fail pass in 130851
 test-armhf-armhf-xl-rtds     12 guest-start                fail pass in 130851

Tests which did not succeed, but are not blocking:
 test-xtf-amd64-amd64-2       69 xtf/test-hvm64-xsa-278  fail blocked in 130212
 test-amd64-amd64-xl-qemut-ws16-amd64 14 guest-localmigrate fail in 130851 like 130041
 test-armhf-armhf-xl-rtds    13 migrate-support-check fail in 130851 never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail in 130851 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 129796
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 130041
 test-amd64-amd64-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail like 130041
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 130212
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 130212
 test-amd64-amd64-xl-rtds     10 debian-install               fail  like 130212
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install        fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install        fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install         fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install         fail never pass

version targeted for testing:
 xen                  7f01558d9b3fc4011741e9f469c96fd93dd8454e
baseline version:
 xen                  f13983db120f5e56dfefbee5d56678d2d43e2914

Last test of basis   130212  2018-11-16 16:19:59 Z   16 days
Failing since        130613  2018-11-20 15:07:39 Z   12 days    6 attempts
Testing same since   130745  2018-11-23 22:08:40 Z    9 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Roger Pau Monné <roger.pau@xxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumprun                                          pass    
 build-i386-rumprun                                           pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumprun-amd64                               pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumprun-i386                                 pass    
 test-amd64-amd64-xl-qemut-win10-i386                         fail    
 test-amd64-i386-xl-qemut-win10-i386                          fail    
 test-amd64-amd64-xl-qemuu-win10-i386                         fail    
 test-amd64-i386-xl-qemuu-win10-i386                          fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7f01558d9b3fc4011741e9f469c96fd93dd8454e
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Fri Nov 23 11:50:17 2018 +0100

    VMX: allow migration of guests with SSBD enabled
    
    The backport of cd53023df9 ("x86/msr: Virtualise MSR_SPEC_CTRL.SSBD for
    guests to use") did not mirror the PV side change into the HVM (VMX-
    specific) code path.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
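
The SSBD parity problem above can be sketched as follows. This is an illustrative stand-alone model, not Xen's actual code: the function name and the validity-mask shape are hypothetical, though the MSR_SPEC_CTRL bit positions (IBRS=0, STIBP=1, SSBD=2) are architectural. The point is that the set of SPEC_CTRL bits a guest may load must include SSBD on the VMX path just as on the PV path, or restoring a migrated HVM guest with SSBD set is rejected.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Architectural MSR_SPEC_CTRL bits. */
#define SPEC_CTRL_IBRS  (1u << 0)
#define SPEC_CTRL_STIBP (1u << 1)
#define SPEC_CTRL_SSBD  (1u << 2)

/* Hypothetical sketch: both the PV and the VMX (HVM) MSR-load paths must
 * accept the same mask.  If the VMX copy of this check lacked SSBD, an
 * incoming migration with SSBD set would fail validation. */
static bool spec_ctrl_valid_sketch(uint64_t val)
{
    return !(val & ~(uint64_t)(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP |
                               SPEC_CTRL_SSBD));
}
```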

commit e43f2ca943453f04383936727fa8f19827d5e596
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Tue Nov 20 15:52:13 2018 +0100

    x86/dom0: Fix shadowing of PV guests with 2M superpages
    
    This is a straight backport of c/s 28d9a9a2d41759b9e5163037b759ac557aea767c
    but with a different justification.
    
    Dom0 may have superpages (e.g. initial P2M), and may be shadowed
    (e.g. PV-L1TF).  Because of this incorrect check, when PV superpages are
    disallowed (which is the security supported configuration), attempting to
    shadow the P2M with its superpages still intact will fail.  A #PF will be
    handed back to the kernel, rather than the superpage being splintered and
    shadowed.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>

commit 0864dd81814f6f07957d85a1e9c9443e06bb7ee2
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Tue Nov 20 15:51:36 2018 +0100

    x86/dom0: Avoid using 1G superpages if shadowing may be necessary
    
    The shadow code doesn't support 1G superpages, and will hand #PF[RSVD] back
    to guests.
    
    For dom0s with 512GB of RAM or more (and subject to the P2M alignment),
    Xen's domain builder might use 1G superpages.
    
    Avoid using 1G superpages (falling back to 2M superpages instead) if there
    is a reasonable chance that we may have to shadow dom0.  This assumes that
    there are no circumstances where we will activate logdirty mode on dom0.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 96f6ee15ad7ca96472779fc5c083b4149495c584
    master date: 2018-11-12 11:26:04 +0000
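
The decision the commit describes can be sketched as a small helper. This is a hypothetical model, not Xen's domain-builder code: the function and parameter names are invented, and only the two x86 superpage orders (2M = order 9, 1G = order 18) are real. It captures the fallback rule: only use a 1G mapping when the range is suitably aligned and there is no chance dom0 will later need to be shadowed.

```c
#include <assert.h>
#include <stdbool.h>

#define PAGE_ORDER_2M  9   /* 2M superpage: 2^9 4k pages */
#define PAGE_ORDER_1G  18  /* 1G superpage: 2^18 4k pages */

/* Hypothetical sketch: pick the mapping order for a dom0 range.  The
 * shadow code cannot handle 1G superpages, so fall back to 2M whenever
 * shadowing dom0 (e.g. for PV-L1TF) is a possibility. */
static unsigned int dom0_map_order_sketch(bool may_shadow, bool aligned_1g)
{
    if ( aligned_1g && !may_shadow )
        return PAGE_ORDER_1G;
    return PAGE_ORDER_2M;
}
```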

commit ca5ede63978f79db910f638472ab51d35d703f27
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Nov 20 15:50:57 2018 +0100

    x86/shadow: shrink struct page_info's shadow_flags to 16 bits
    
    This is to avoid it overlapping the linear_pt_count field needed for PV
    domains. Introduce a separate, HVM-only pagetable_dying field to replace
    the sole one left in the upper 16 bits.
    
    Note that the accesses to ->shadow_flags in shadow_{pro,de}mote() get
    switched to non-atomic, non-bitops operations, as {test,set,clear}_bit()
    are not allowed on uint16_t fields and hence their use would have
    required ugly casts. This is fine because all updates of the field ought
    to occur with the paging lock held, and other updates of it use |= and
    &= as well (i.e. using atomic operations here didn't really guard
    against potentially racing updates elsewhere).
    
    This is part of XSA-280.
    
    Reported-by: Prgmr.com Security <security@xxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    master commit: 789589968ed90e82a832dbc60e958c76b787be7e
    master date: 2018-11-20 14:59:54 +0100
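
The switch from bitops to plain read-modify-write that the commit describes can be sketched like this. The structure and names below are a cut-down, hypothetical stand-in for Xen's page_info (the real SHF_* bit positions differ); what it shows is the rationale: {test,set,clear}_bit() operate on long-sized fields, so a uint16_t shadow_flags is instead updated with |= and &=, which is safe only because every caller holds the paging lock.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for struct page_info; field layout is hypothetical. */
struct page_info_sketch {
    uint16_t shadow_flags;      /* shrunk to 16 bits, per the commit */
    uint16_t pagetable_dying;   /* HVM-only flag, split out of the upper bits */
};

#define SHF_OUT_OF_SYNC   (1u << 0)   /* hypothetical bit positions */
#define SHF_OOS_MAY_WRITE (1u << 1)

/* Non-atomic updates: correct only with the paging lock held, since
 * concurrent |= / &= on the same field would otherwise race. */
static void shadow_promote_sketch(struct page_info_sketch *pg, uint16_t type)
{
    pg->shadow_flags |= type;            /* plain |=, not set_bit() */
}

static void shadow_demote_sketch(struct page_info_sketch *pg, uint16_t type)
{
    pg->shadow_flags &= (uint16_t)~type; /* plain &=, not clear_bit() */
}
```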

commit d96e6290c217631ff53190105e5e0a0b47c5b8c7
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Nov 20 15:50:13 2018 +0100

    x86/shadow: move OOS flag bit positions
    
    In preparation of reducing struct page_info's shadow_flags field to 16
    bits, lower the bit positions used for SHF_out_of_sync and
    SHF_oos_may_write.
    
    Instead of also adjusting the open coded use in _get_page_type(),
    introduce shadow_prepare_page_type_change() to contain knowledge of the
    bit positions to shadow code.
    
    This is part of XSA-280.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    master commit: d68e1070c3e8f4af7a31040f08bdd98e6d6eac1d
    master date: 2018-11-20 14:59:13 +0100

commit d819a65bbc3e68f38dde03ade764de9157605008
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Tue Nov 20 15:49:39 2018 +0100

    x86/mm: Don't perform flush after failing to update a guest's L1e
    
    If the L1e update hasn't occurred, the flush cannot do anything useful.  This
    skips the potentially expensive vcpumask_to_pcpumask() conversion, and
    broadcast TLB shootdown.
    
    More importantly however, we might be in the error path due to a bad va
    parameter from the guest, and this should not propagate into the TLB
    flushing logic.  The INVPCID instruction for example raises #GP for a
    non-canonical address.
    
    This is XSA-279.
    
    Reported-by: Matthew Daley <mattd@xxxxxxxxxxx>
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 6c8d50288722672ecc8e19b0741a31b521d01706
    master date: 2018-11-20 14:58:41 +0100
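
The XSA-279 control-flow change amounts to an early return before the flush. The sketch below is hypothetical (the function names and the flush counter are invented, and the real hypercall path is far larger); it shows the shape of the fix: on a failed L1e update, bail out so the guest-supplied va never reaches the TLB-flushing logic (and hence never reaches INVPCID with a non-canonical address).

```c
#include <assert.h>
#include <stdbool.h>

static int flushes_done;   /* counts flushes, for illustration only */

/* Stand-in for the real PTE write; 'update_ok' models success/failure. */
static bool update_l1e_sketch(bool update_ok)
{
    return update_ok;
}

static int do_update_va_mapping_sketch(bool update_ok)
{
    if ( !update_l1e_sketch(update_ok) )
        return -1;       /* error path: no flush, bad va never propagates */

    flushes_done++;      /* flush only after a successful update */
    return 0;
}
```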

commit 15b4ee94bed702cb732e7fa4cbab33280a0965d8
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Nov 20 15:49:01 2018 +0100

    AMD/IOMMU: suppress PTE merging after initial table creation
    
    The logic is not fit for this purpose, so simply disable its use until
    it can be fixed / replaced. Note that this re-enables merging for the
    table creation case, which was disabled as a (perhaps unintended) side
    effect of the earlier "amd/iommu: fix flush checks". It relies on no
    page getting mapped more than once (with different properties) in this
    process, as that would still be beyond what the merging logic can cope
    with. But arch_iommu_populate_page_table() guarantees this afaict.
    
    This is part of XSA-275.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 937ef32565fa3a81fdb37b9dd5aa99a1b87afa75
    master date: 2018-11-20 14:55:14 +0100

commit f97a1d1375becd30d0541ba85caac4215340d0c4
Author: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Date:   Tue Nov 20 15:48:22 2018 +0100

    amd/iommu: fix flush checks
    
    Flush checking for AMD IOMMU didn't check whether the previous entry
    was present, or whether the flags (writable/readable) changed in order
    to decide whether a flush should be executed.
    
    Fix this by taking the writable/readable/next-level fields into account,
    together with the present bit.
    
    Along these lines the flushing in amd_iommu_map_page() must not be
    omitted for PV domains. The comment there was simply wrong: Mappings may
    very well change, both their addresses and their permissions. Ultimately
    this should honor iommu_dont_flush_iotlb, but to achieve this
    amd_iommu_ops first needs to gain an .iotlb_flush hook.
    
    Also make clear_iommu_pte_present() static, to demonstrate there's no
    caller omitting the (subsequent) flush.
    
    This is part of XSA-275.
    
    Reported-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 1a7ffe466cd057daaef245b0a1ab6b82588e4c01
    master date: 2018-11-20 14:52:12 +0100
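
The corrected flush check can be sketched as a comparison over the fields the commit names. This is an illustrative model, not the AMD IOMMU driver's real PTE layout (the struct and function names are hypothetical): a flush is needed only when the previous entry was present and any of the writable/readable/next-level fields changed.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical PTE model: just the fields relevant to the flush decision. */
struct iommu_pte_sketch {
    bool present, writable, readable;
    unsigned int next_level;
};

/* Flush iff the old entry was present and something the IOMMU may have
 * cached (permissions or next-level) actually changed. */
static bool need_flush_sketch(const struct iommu_pte_sketch *old_pte,
                              const struct iommu_pte_sketch *new_pte)
{
    if ( !old_pte->present )
        return false;    /* nothing could be cached, nothing to flush */

    return old_pte->writable   != new_pte->writable   ||
           old_pte->readable   != new_pte->readable   ||
           old_pte->next_level != new_pte->next_level;
}
```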
(qemu changes not included)

_______________________________________________
osstest-output mailing list
osstest-output@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/osstest-output

 

