[xen-4.11-testing baseline-only test] 75625: regressions - FAIL
This run is configured for baseline tests only.

flight 75625 xen-4.11-testing real [real]
http://osstest.xensource.com/osstest/logs/75625/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1          fail REGR. vs. 75588

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install     fail like 75588
 test-amd64-amd64-i386-pvgrub 19 guest-start/debian.repeat   fail like 75588
 test-amd64-i386-xl-raw       10 debian-di-install           fail like 75588
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check       fail never pass
 test-amd64-i386-xl-pvshim    12 guest-start                 fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install     fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check       fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install     fail never pass
 test-amd64-i386-libvirt      13 migrate-support-check       fail never pass
 test-amd64-amd64-xl-pvshim   12 guest-start                 fail never pass
 test-armhf-armhf-xl-rtds     12 guest-start                 fail never pass
 test-armhf-armhf-xl-midway   12 guest-start                 fail never pass
 test-armhf-armhf-xl-credit1  12 guest-start                 fail never pass
 test-armhf-armhf-xl-multivcpu 12 guest-start                fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install      fail never pass
 test-armhf-armhf-xl-credit2  12 guest-start                 fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check       fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     12 guest-start                 fail never pass
 test-armhf-armhf-xl          12 guest-start                 fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop           fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check       fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop          fail never pass
 test-amd64-i386-xl-qemut-win10-i386 17 guest-stop           fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop          fail never pass
 test-armhf-armhf-xl-vhd      10 debian-di-install           fail never pass
 test-armhf-armhf-libvirt-raw 10 debian-di-install           fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop           fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 17 guest-stop           fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop          fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop           fail never pass

version targeted for testing:
 xen                  49caabf2584a26d16f73b4bd423329f8d99f7e71
baseline version:
 xen                  dea9fc0e02d92f5e6d46680aa0a52fa758eca9c4

Last test of basis    75588  2018-11-11 18:23:28 Z   17 days
Testing same since    75625  2018-11-29 00:19:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Paul Durrant <paul.durrant@xxxxxxxxxx>
  Roger Pau Monné <roger.pau@xxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass
 build-i386-xsm                                               pass
 build-amd64-xtf                                              pass
 build-amd64                                                  pass
 build-armhf                                                  pass
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-armhf-libvirt                                          pass
 build-i386-libvirt                                           pass
 build-amd64-prev                                             pass
 build-i386-prev                                              pass
 build-amd64-pvops                                            pass
 build-armhf-pvops                                            pass
 build-i386-pvops                                             pass
 build-amd64-rumprun                                          pass
 build-i386-rumprun                                           pass
 test-xtf-amd64-amd64-1                                       pass
 test-xtf-amd64-amd64-2                                       pass
 test-xtf-amd64-amd64-3                                       pass
 test-xtf-amd64-amd64-4                                       pass
 test-xtf-amd64-amd64-5                                       pass
 test-amd64-amd64-xl                                          pass
 test-armhf-armhf-xl                                          fail
 test-amd64-i386-xl                                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass
 test-amd64-amd64-libvirt-xsm                                 pass
 test-amd64-i386-libvirt-xsm                                  pass
 test-amd64-amd64-xl-xsm                                      pass
 test-amd64-i386-xl-xsm                                       pass
 test-amd64-amd64-qemuu-nested-amd                            fail
 test-amd64-amd64-xl-pvhv2-amd                                pass
 test-amd64-i386-qemut-rhel6hvm-amd                           pass
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass
 test-amd64-i386-freebsd10-amd64                              pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass
 test-amd64-amd64-rumprun-amd64                               pass
 test-amd64-amd64-xl-qemut-win7-amd64                         fail
 test-amd64-i386-xl-qemut-win7-amd64                          fail
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail
 test-amd64-i386-xl-qemuu-win7-amd64                          fail
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail
 test-amd64-i386-xl-qemut-ws16-amd64                          fail
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail
 test-amd64-amd64-xl-credit1                                  pass
 test-armhf-armhf-xl-credit1                                  fail
 test-amd64-amd64-xl-credit2                                  pass
 test-armhf-armhf-xl-credit2                                  fail
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail
 test-amd64-i386-freebsd10-i386                               pass
 test-amd64-i386-rumprun-i386                                 pass
 test-amd64-amd64-xl-qemut-win10-i386                         fail
 test-amd64-i386-xl-qemut-win10-i386                          fail
 test-amd64-amd64-xl-qemuu-win10-i386                         fail
 test-amd64-i386-xl-qemuu-win10-i386                          fail
 test-amd64-amd64-qemuu-nested-intel                          fail
 test-amd64-amd64-xl-pvhv2-intel                              pass
 test-amd64-i386-qemut-rhel6hvm-intel                         pass
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass
 test-amd64-amd64-libvirt                                     pass
 test-armhf-armhf-libvirt                                     fail
 test-amd64-i386-libvirt                                      pass
 test-amd64-amd64-livepatch                                   pass
 test-amd64-i386-livepatch                                    pass
 test-armhf-armhf-xl-midway                                   fail
 test-amd64-amd64-migrupgrade                                 pass
 test-amd64-i386-migrupgrade                                  pass
 test-amd64-amd64-xl-multivcpu                                pass
 test-armhf-armhf-xl-multivcpu                                fail
 test-amd64-amd64-pair                                        pass
 test-amd64-i386-pair                                         pass
 test-amd64-amd64-libvirt-pair                                pass
 test-amd64-i386-libvirt-pair                                 pass
 test-amd64-amd64-amd64-pvgrub                                pass
 test-amd64-amd64-i386-pvgrub                                 fail
 test-amd64-amd64-xl-pvshim                                   fail
 test-amd64-i386-xl-pvshim                                    fail
 test-amd64-amd64-pygrub                                      pass
 test-amd64-amd64-xl-qcow2                                    pass
 test-armhf-armhf-libvirt-raw                                 fail
 test-amd64-i386-xl-raw                                       fail
 test-amd64-amd64-xl-rtds                                     pass
 test-armhf-armhf-xl-rtds                                     fail
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass
 test-amd64-amd64-xl-shadow                                   pass
 test-amd64-i386-xl-shadow                                    pass
 test-amd64-amd64-libvirt-vhd                                 pass
 test-armhf-armhf-xl-vhd                                      fail

------------------------------------------------------------
sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
    http://osstest.xensource.com/osstest/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary

Push not applicable.

------------------------------------------------------------
commit 49caabf2584a26d16f73b4bd423329f8d99f7e71
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Tue Nov 20 15:35:48 2018 +0100

    x86/dom0: Avoid using 1G superpages if shadowing may be necessary

    The shadow code doesn't support 1G superpages, and will hand #PF[RSVD]
    back to guests.

    For dom0s with 512GB of RAM or more (and subject to the P2M alignment),
    Xen's domain builder might use 1G superpages.

    Avoid using 1G superpages (falling back to 2M superpages instead) if
    there is a reasonable chance that we may have to shadow dom0.  This
    assumes that there are no circumstances where we will activate logdirty
    mode on dom0.
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 96f6ee15ad7ca96472779fc5c083b4149495c584
    master date: 2018-11-12 11:26:04 +0000

commit bbe48b5b67ccebbc73342bfd34603c4859cde4df
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Nov 20 15:34:51 2018 +0100

    x86/shadow: shrink struct page_info's shadow_flags to 16 bits

    This is to avoid it overlapping the linear_pt_count field needed for PV
    domains.  Introduce a separate, HVM-only pagetable_dying field to
    replace the sole one left in the upper 16 bits.

    Note that the accesses to ->shadow_flags in shadow_{pro,de}mote() get
    switched to non-atomic, non-bitops operations, as {test,set,clear}_bit()
    are not allowed on uint16_t fields and hence their use would have
    required ugly casts.  This is fine because all updates of the field
    ought to occur with the paging lock held, and other updates of it use
    |= and &= as well (i.e. using atomic operations here didn't really
    guard against potentially racing updates elsewhere).

    This is part of XSA-280.

    Reported-by: Prgmr.com Security <security@xxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    master commit: 789589968ed90e82a832dbc60e958c76b787be7e
    master date: 2018-11-20 14:59:54 +0100

commit 93177f1f0fe543e310098938eeabec6c2db14c27
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Nov 20 15:34:13 2018 +0100

    x86/shadow: move OOS flag bit positions

    In preparation for reducing struct page_info's shadow_flags field to
    16 bits, lower the bit positions used for SHF_out_of_sync and
    SHF_oos_may_write.

    Instead of also adjusting the open-coded use in _get_page_type(),
    introduce shadow_prepare_page_type_change() to confine knowledge of the
    bit positions to shadow code.

    This is part of XSA-280.
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    master commit: d68e1070c3e8f4af7a31040f08bdd98e6d6eac1d
    master date: 2018-11-20 14:59:13 +0100

commit e738850aaf88f201997b5d05adf85dffb54c0c10
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Tue Nov 20 15:33:16 2018 +0100

    x86/mm: Don't perform flush after failing to update a guest's L1e

    If the L1e update hasn't occurred, the flush cannot do anything useful.
    This skips the potentially expensive vcpumask_to_pcpumask()
    conversion, and broadcast TLB shootdown.

    More importantly however, we might be in the error path due to a bad
    va parameter from the guest, and this should not propagate into the
    TLB flushing logic.  The INVPCID instruction for example raises #GP
    for a non-canonical address.

    This is XSA-279.

    Reported-by: Matthew Daley <mattd@xxxxxxxxxxx>
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 6c8d50288722672ecc8e19b0741a31b521d01706
    master date: 2018-11-20 14:58:41 +0100

commit eb6830a1c8347d0c5e33571f93cbd2d79330798d
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Tue Nov 20 15:32:34 2018 +0100

    x86/mm: Put the gfn on all paths after get_gfn_query()

    c/s 7867181b2 "x86/PoD: correctly handle non-order-0
    decrease-reservation requests" introduced an early exit in
    guest_remove_page() for unexpected p2m types.  However,
    get_gfn_query() internally takes the p2m lock, and must be matched
    with a put_gfn() call later.

    Fix the erroneous comment beside the declaration of get_gfn_query().

    This is XSA-277.
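[Editorial aside: the acquire/release pairing described in the XSA-277 commit above can be modeled with a toy C sketch. The function names get_gfn_query(), put_gfn() and guest_remove_page() are borrowed from the commit message for illustration only; the bodies below are stand-ins, not Xen's implementation.]

```c
#include <assert.h>

static int p2m_lock_count;          /* stand-in for the p2m lock depth */

/* Toy stand-in: every call "takes the p2m lock". */
static int get_gfn_query(unsigned long gfn, int *type)
{
    p2m_lock_count++;
    *type = (gfn % 2) ? 1 : 0;      /* pretend odd gfns have an unexpected type */
    return 0;
}

/* Toy stand-in: releases the lock taken by get_gfn_query(). */
static void put_gfn(unsigned long gfn)
{
    (void)gfn;
    p2m_lock_count--;
}

/* Correct shape: the early exit still pairs the get with a put. */
static int guest_remove_page(unsigned long gfn)
{
    int type;
    get_gfn_query(gfn, &type);
    if (type != 0) {
        put_gfn(gfn);               /* the put the original early exit lacked */
        return -1;
    }
    /* ... actual removal work would go here ... */
    put_gfn(gfn);
    return 0;
}
```

[With the put on every path, p2m_lock_count returns to zero after both the normal and the early-exit call, which is the invariant the fix restores.]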
    Reported-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: d80988cfc04ee608bee722448e7c3bc8347ec04c
    master date: 2018-11-20 14:58:10 +0100

commit b88ccb3ae79decfa495ae965c02aeedc8fda2bcb
Author: Paul Durrant <paul.durrant@xxxxxxxxxx>
Date:   Tue Nov 20 15:31:48 2018 +0100

    x86/hvm/ioreq: use ref-counted target-assigned shared pages

    Passing MEMF_no_refcount to alloc_domheap_pages() will allocate, as
    expected, a page that is assigned to the specified domain but is not
    accounted for in tot_pages.  Unfortunately there is no logic for
    tracking such allocations and avoiding any adjustment to tot_pages
    when the page is freed.

    The only caller of alloc_domheap_pages() that passes MEMF_no_refcount
    is hvm_alloc_ioreq_mfn(), so this patch removes use of the flag from
    that call-site to avoid the possibility of a domain using an ioreq
    server as a means to adjust its tot_pages and hence allocate more
    memory than it should be able to.

    However, the reason for using the flag in the first place was to avoid
    the allocation failing if the emulator domain is already at its
    maximum memory limit.  Hence this patch switches to allocating memory
    from the target domain instead of the emulator domain.  There is
    already an extra memory allowance of 2MB (LIBXL_HVM_EXTRA_MEMORY)
    applied to HVM guests, which is sufficient to cover the pages required
    by the supported configuration of a single IOREQ server for QEMU.
    (Stub-domains do not, so far, use resource mapping.)  It is also the
    case that QEMU will have mapped the IOREQ server pages before the
    guest boots, hence it is not possible for the guest to inflate its
    balloon to consume these pages.
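[Editorial aside: the accounting asymmetry the commit above describes can be shown with a toy C model. MEMF_no_refcount, alloc_domheap_pages() and tot_pages are names taken from the commit message; the bodies are illustrative stand-ins, not Xen's allocator.]

```c
#include <assert.h>

#define MEMF_no_refcount 0x1        /* toy flag value, for illustration */

struct domain { long tot_pages; };

/* Toy stand-in: allocations made with MEMF_no_refcount are not
 * accounted in tot_pages. */
static void alloc_domheap_page(struct domain *d, int memflags)
{
    if (!(memflags & MEMF_no_refcount))
        d->tot_pages++;
}

/* Toy stand-in: the free path always subtracts, with no record of
 * whether the page was accounted at allocation time. */
static void free_domheap_page(struct domain *d)
{
    d->tot_pages--;
}
```

[An unaccounted alloc followed by a free leaves tot_pages one below the number of pages the domain really holds; repeated, this is exactly the mechanism a domain could use to "allocate more memory than it should be able to".]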
    Reported-by: Julien Grall <julien.grall@xxxxxxx>
    Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    master commit: e862e6ceb1fd971d755a0c57d6a0f3b8065187dc
    master date: 2018-11-20 14:57:38 +0100

commit 3b2a779ccb9fd3c02ab2a68cb95a9628f0837029
Author: Paul Durrant <paul.durrant@xxxxxxxxxx>
Date:   Tue Nov 20 15:31:14 2018 +0100

    x86/hvm/ioreq: fix page referencing

    The code does not take a page reference in hvm_alloc_ioreq_mfn(), only
    a type reference.  This can lead to a situation where a malicious
    domain with XSM_DM_PRIV can engineer a sequence as follows:

    - create IOREQ server: no pages as yet.
    - acquire resource: page allocated, total 0.
    - decrease reservation: -1 ref, total -1.

    This will cause Xen to hit a BUG_ON() in free_domheap_pages().

    This patch fixes the issue by changing the call to get_page_type() in
    hvm_alloc_ioreq_mfn() to a call to get_page_and_type().  This change
    in turn requires an extra put_page() in hvm_free_ioreq_mfn() in the
    case that _PGC_allocated is still set (i.e. a decrease reservation has
    not occurred) to avoid the page being leaked.

    This is part of XSA-276.

    Reported-by: Julien Grall <julien.grall@xxxxxxx>
    Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: f6b6ae78679b363ff670a9c125077c436dabd608
    master date: 2018-11-20 14:57:05 +0100

commit 946f345547b9810045e754ea4b73b4e8c5e7935b
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Nov 20 15:30:25 2018 +0100

    AMD/IOMMU: suppress PTE merging after initial table creation

    The logic is not fit for this purpose, so simply disable its use until
    it can be fixed / replaced.  Note that this re-enables merging for the
    table creation case, which was disabled as a (perhaps unintended) side
    effect of the earlier "amd/iommu: fix flush checks".  It relies on no
    page getting mapped more than once (with different properties) in this
    process, as that would still be beyond what the merging logic can cope
    with.
    But arch_iommu_populate_page_table() guarantees this afaict.

    This is part of XSA-275.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 937ef32565fa3a81fdb37b9dd5aa99a1b87afa75
    master date: 2018-11-20 14:55:14 +0100

commit 086a9dded27eb39a74f1d51ca19c0e14a0cab277
Author: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Date:   Tue Nov 20 15:29:40 2018 +0100

    amd/iommu: fix flush checks

    Flush checking for AMD IOMMU didn't check whether the previous entry
    was present, or whether the flags (writable/readable) changed in order
    to decide whether a flush should be executed.

    Fix this by taking the writable/readable/next-level fields into
    account, together with the present bit.

    Along these lines the flushing in amd_iommu_map_page() must not be
    omitted for PV domains.  The comment there was simply wrong: mappings
    may very well change, both their addresses and their permissions.

    Ultimately this should honor iommu_dont_flush_iotlb, but to achieve
    this amd_iommu_ops first needs to gain an .iotlb_flush hook.

    Also make clear_iommu_pte_present() static, to demonstrate there's no
    caller omitting the (subsequent) flush.

    This is part of XSA-275.

    Reported-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 1a7ffe466cd057daaef245b0a1ab6b82588e4c01
    master date: 2018-11-20 14:52:12 +0100

(qemu changes not included)

_______________________________________________
osstest-output mailing list
osstest-output@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/osstest-output