[xen-unstable baseline-only test] 72485: regressions - FAIL
This run is configured for baseline tests only.

flight 72485 xen-unstable real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72485/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-pvhv2-amd          7 xen-boot                 fail REGR. vs. 72455
 test-amd64-amd64-xl-qemuu-ws16-amd64   7 xen-boot                 fail REGR. vs. 72455
 test-armhf-armhf-xl-xsm               12 guest-start              fail REGR. vs. 72455
 test-amd64-amd64-xl-qemut-ws16-amd64  16 guest-localmigrate/x10   fail REGR. vs. 72455

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-intel   17 debian-hvm-install/l1/l2   fail blocked in 72455
 test-amd64-amd64-xl-qemuu-win10-i386  10 windows-install            fail like 72455
 test-amd64-i386-xl-qemuu-win10-i386   10 windows-install            fail like 72455
 test-armhf-armhf-libvirt-xsm          14 saverestore-support-check  fail like 72455
 test-armhf-armhf-libvirt              14 saverestore-support-check  fail like 72455
 test-armhf-armhf-libvirt-raw          13 saverestore-support-check  fail like 72455
 test-amd64-i386-xl-qemuu-win7-amd64   17 guest-stop                 fail like 72455
 test-amd64-i386-xl-qemut-win7-amd64   17 guest-stop                 fail like 72455
 test-amd64-amd64-xl-qemuu-win7-amd64  17 guest-stop                 fail like 72455
 test-amd64-amd64-examine               4 memdisk-try-append         fail never pass
 test-amd64-amd64-xl-pvhv2-intel       12 guest-start                fail never pass
 test-armhf-armhf-xl-midway            13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-midway            14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-credit2           13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-credit2           14 saverestore-support-check  fail never pass
 test-armhf-armhf-libvirt-xsm          13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-multivcpu         13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-multivcpu         14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-rtds              13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-rtds              14 saverestore-support-check  fail never pass
 test-armhf-armhf-libvirt              13 migrate-support-check      fail never pass
 test-armhf-armhf-xl                   13 migrate-support-check      fail never pass
 test-armhf-armhf-xl                   14 saverestore-support-check  fail never pass
 test-amd64-i386-libvirt-xsm           13 migrate-support-check      fail never pass
 test-amd64-amd64-libvirt              13 migrate-support-check      fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64   10 windows-install            fail never pass
 test-amd64-i386-libvirt               13 migrate-support-check      fail never pass
 test-amd64-amd64-libvirt-xsm          13 migrate-support-check      fail never pass
 test-armhf-armhf-libvirt-raw          12 migrate-support-check      fail never pass
 test-amd64-i386-libvirt-qcow2         12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-vhd               12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-vhd               13 saverestore-support-check  fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd     17 debian-hvm-install/l1/l2   fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd          12 migrate-support-check      fail never pass
 test-amd64-i386-xl-qemut-win10-i386   17 guest-stop                 fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64   17 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64  17 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win10-i386  17 guest-stop                 fail never pass

version targeted for testing:
 xen                  d2f86bf604698806d311cc251c1b66fbb752673c
baseline version:
 xen                  b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
Last test of basis    72455  2017-11-16 02:21:58 Z    7 days
Testing same since    72485  2017-11-23 12:16:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Pop <apop@xxxxxxxxxxxxxxx>
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Julien Grall <julien.grall@xxxxxxxxxx>
  Stefano Stabellini <sstabellini@xxxxxxxxxx>

jobs:
 build-amd64-xsm                                        pass
 build-armhf-xsm                                        pass
 build-i386-xsm                                         pass
 build-amd64-xtf                                        pass
 build-amd64                                            pass
 build-armhf                                            pass
 build-i386                                             pass
 build-amd64-libvirt                                    pass
 build-armhf-libvirt                                    pass
 build-i386-libvirt                                     pass
 build-amd64-prev                                       pass
 build-i386-prev                                        pass
 build-amd64-pvops                                      pass
 build-armhf-pvops                                      pass
 build-i386-pvops                                       pass
 build-amd64-rumprun                                    pass
 build-i386-rumprun                                     pass
 test-xtf-amd64-amd64-1                                 pass
 test-xtf-amd64-amd64-2                                 pass
 test-xtf-amd64-amd64-3                                 pass
 test-xtf-amd64-amd64-4                                 pass
 test-xtf-amd64-amd64-5                                 pass
 test-amd64-amd64-xl                                    pass
 test-armhf-armhf-xl                                    pass
 test-amd64-i386-xl                                     pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm          pass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm           pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm     pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm      pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm          pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm           pass
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm  pass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm   pass
 test-amd64-amd64-libvirt-xsm                           pass
 test-armhf-armhf-libvirt-xsm                           pass
 test-amd64-i386-libvirt-xsm                            pass
 test-amd64-amd64-xl-xsm                                pass
 test-armhf-armhf-xl-xsm                                fail
 test-amd64-i386-xl-xsm                                 pass
 test-amd64-amd64-qemuu-nested-amd                      fail
 test-amd64-amd64-xl-pvhv2-amd                          fail
 test-amd64-i386-qemut-rhel6hvm-amd                     pass
 test-amd64-i386-qemuu-rhel6hvm-amd                     pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64              pass
 test-amd64-i386-xl-qemut-debianhvm-amd64               pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64              pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64               pass
 test-amd64-i386-freebsd10-amd64                        pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                   pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                    pass
 test-amd64-amd64-rumprun-amd64                         pass
 test-amd64-amd64-xl-qemut-win7-amd64                   fail
 test-amd64-i386-xl-qemut-win7-amd64                    fail
 test-amd64-amd64-xl-qemuu-win7-amd64                   fail
 test-amd64-i386-xl-qemuu-win7-amd64                    fail
 test-amd64-amd64-xl-qemut-ws16-amd64                   fail
 test-amd64-i386-xl-qemut-ws16-amd64                    fail
 test-amd64-amd64-xl-qemuu-ws16-amd64                   fail
 test-amd64-i386-xl-qemuu-ws16-amd64                    fail
 test-amd64-amd64-xl-credit2                            pass
 test-armhf-armhf-xl-credit2                            pass
 test-amd64-amd64-examine                               pass
 test-armhf-armhf-examine                               pass
 test-amd64-i386-examine                                pass
 test-amd64-i386-freebsd10-i386                         pass
 test-amd64-i386-rumprun-i386                           pass
 test-amd64-amd64-xl-qemut-win10-i386                   fail
 test-amd64-i386-xl-qemut-win10-i386                    fail
 test-amd64-amd64-xl-qemuu-win10-i386                   fail
 test-amd64-i386-xl-qemuu-win10-i386                    fail
 test-amd64-amd64-qemuu-nested-intel                    fail
 test-amd64-amd64-xl-pvhv2-intel                        fail
 test-amd64-i386-qemut-rhel6hvm-intel                   pass
 test-amd64-i386-qemuu-rhel6hvm-intel                   pass
 test-amd64-amd64-libvirt                               pass
 test-armhf-armhf-libvirt                               pass
 test-amd64-i386-libvirt                                pass
 test-amd64-amd64-livepatch                             pass
 test-amd64-i386-livepatch                              pass
 test-armhf-armhf-xl-midway                             pass
 test-amd64-amd64-migrupgrade                           pass
 test-amd64-i386-migrupgrade                            pass
 test-amd64-amd64-xl-multivcpu                          pass
 test-armhf-armhf-xl-multivcpu                          pass
 test-amd64-amd64-pair                                  pass
 test-amd64-i386-pair                                   pass
 test-amd64-amd64-libvirt-pair                          pass
 test-amd64-i386-libvirt-pair                           pass
 test-amd64-amd64-amd64-pvgrub                          pass
 test-amd64-amd64-i386-pvgrub                           pass
 test-amd64-amd64-pygrub                                pass
 test-amd64-i386-libvirt-qcow2                          pass
 test-amd64-amd64-xl-qcow2                              pass
 test-armhf-armhf-libvirt-raw                           pass
 test-amd64-i386-xl-raw                                 pass
 test-amd64-amd64-xl-rtds                               pass
 test-armhf-armhf-xl-rtds                               pass
 test-amd64-amd64-libvirt-vhd                           pass
 test-armhf-armhf-xl-vhd                                pass

------------------------------------------------------------
sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
    http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary

Push not applicable.

------------------------------------------------------------
commit d2f86bf604698806d311cc251c1b66fbb752673c
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Nov 16 21:34:02 2017 +0000

    x86/hvm: Don't corrupt the HVM context stream when writing the MSR record

    Ever since it was introduced in c/s bd1f0b45ff, hvm_save_cpu_msrs() has
    had a bug whereby it corrupts the HVM context stream if some, but fewer
    than the maximum number of MSRs are written.

    _hvm_init_entry() creates an hvm_save_descriptor with length for
    msr_count_max, but in the case that we write fewer than max, h->cur only
    moves forward by the amount of space used, causing the subsequent
    hvm_save_descriptor to be written within the bounds of the previous one.

    To resolve this, reduce the length reported by the descriptor to match
    the actual number of bytes used.

    A typical failure on the destination side looks like:

        (XEN) HVM4 restore: CPU_MSR 0
        (XEN) HVM4.0 restore: not enough data left to read 56 MSR bytes
        (XEN) HVM4 restore: failed to load entry 20/0

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Release-acked-by: Julien Grall <julien.grall@xxxxxxxxxx>

commit f1a0a8c3fe2fb37c77ec1fe43618feef412427b5
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Nov 16 21:10:00 2017 +0000

    tools/libxc: Fix restoration of PV MSRs after migrate

    There are two bugs in process_vcpu_msrs() which clearly demonstrate that
    I didn't test this bit of Migration v2 very well when writing it...

    vcpu->msrsz is always expected to be a multiple of xen_domctl_vcpu_msr_t
    records in a spec-compliant stream, so the modulo yields 0 for the
    msr_count, rather than the actual number sent in the stream.

    Passing 0 for the msr_count causes the hypercall to exit early, and hides
    the fact that the guest handle is inserted into the wrong field in the
    domctl union.

    The reason that these bugs have gone unnoticed for so long is that the
    only MSRs passed like this for PV guests are the AMD DBGEXT MSRs, which
    only exist in fairly modern hardware, and whose use doesn't appear to be
    implemented in any contemporary PV guests.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Release-acked-by: Julien Grall <julien.grall@xxxxxxxxxx>
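As an aside for readers following the save-stream fix in d2f86bf6 above, the failure mode is easier to see in a small, self-contained C model. Everything below (the record header, the buffer handling, the helper name) is hypothetical and only mirrors the shape of the problem; it is not the Xen code.

/*
 * Simplified model of the bug fixed in d2f86bf6 -- hypothetical types, not
 * the Xen source.  A record header is written sized for the maximum number
 * of MSR entries; if fewer are actually saved, the header's length must be
 * shrunk to the bytes really used, otherwise the next header is placed
 * inside the previous record's claimed bounds.
 */
#include <stdint.h>
#include <string.h>

struct rec_hdr {            /* stand-in for struct hvm_save_descriptor */
    uint16_t typecode;      /* 20 matches the CPU_MSR entry in the log above */
    uint16_t instance;
    uint32_t length;        /* payload bytes that follow this header */
};

struct msr_entry {
    uint32_t index;
    uint32_t reserved;
    uint64_t value;
};

static int save_msr_record(uint8_t *buf, size_t *cur, size_t bufsz,
                           const struct msr_entry *msrs, unsigned int count,
                           unsigned int count_max)
{
    size_t hdr_off = *cur;
    struct rec_hdr hdr = {
        .typecode = 20,
        .instance = 0,
        .length   = count_max * sizeof(struct msr_entry), /* worst case */
    };

    if ( hdr_off + sizeof(hdr) + hdr.length > bufsz )
        return -1;

    memcpy(buf + *cur, &hdr, sizeof(hdr));
    *cur += sizeof(hdr);

    memcpy(buf + *cur, msrs, count * sizeof(*msrs));
    *cur += count * sizeof(*msrs);        /* cursor advances by actual use */

    /* The fix: rewrite the header so its length matches what was written,
     * keeping the cursor and the descriptor in agreement. */
    hdr.length = count * sizeof(*msrs);
    memcpy(buf + hdr_off, &hdr, sizeof(hdr));

    return 0;
}

Without the final header fix-up, the next record's header lands inside the space the previous header still claims, which is what the quoted restore errors trip over.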
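The libxc bug addressed by f1a0a8c3 is a modulo-where-division-was-meant slip. A minimal sketch, again with a hypothetical record type rather than the real xen_domctl_vcpu_msr_t handling:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in; only its size matters for the illustration. */
struct vcpu_msr { uint32_t index; uint32_t reserved; uint64_t value; };

static unsigned int msr_count_buggy(size_t msrsz)
{
    /* A spec-compliant stream makes msrsz a whole number of records, so
     * this always evaluates to 0. */
    return msrsz % sizeof(struct vcpu_msr);
}

static unsigned int msr_count_fixed(size_t msrsz)
{
    assert(msrsz % sizeof(struct vcpu_msr) == 0);
    return msrsz / sizeof(struct vcpu_msr);
}

Because the buggy count is always 0, the hypercall exits early, which is also why the misplaced guest handle described in the commit went unnoticed.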
commit eb0660c6950e08e44fdfeca3e29320382e2a1554
Author: Adrian Pop <apop@xxxxxxxxxxxxxxx>
Date:   Wed Nov 15 15:47:59 2017 +0200

    x86/hvm: Fix altp2m_vcpu_enable_notify error handling

    The altp2m_vcpu_enable_notify subop handler might skip calling
    rcu_unlock_domain() after rcu_lock_current_domain().  Albeit since both
    rcu functions are no-ops when run on the current domain, this doesn't
    really have repercussions.

    The second change is adding a missing break that would have potentially
    enabled #VE for the current domain even if it had intended to enable it
    for another one (not a supported functionality).

    Signed-off-by: Adrian Pop <apop@xxxxxxxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Release-acked-by: Julien Grall <julien.grall@xxxxxxxxxx>

commit d20daf4294adbdb9316850566013edb98db7bfbc
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Nov 16 10:38:14 2017 +0100

    x86/shadow: correct SH_LINEAR mapping detection in sh_guess_wrmap()

    The fix for XSA-243 / CVE-2017-15592 (c/s bf2b4eadcf379) introduced a
    change in behaviour for sh_guest_wrmap(), where it had to cope with no
    shadow linear mapping being present.

    As the name suggests, guest_vtable is a mapping of the guest's pagetable,
    not Xen's pagetable, meaning that it isn't the pagetable we need to check
    for the shadow linear slot in.

    The practical upshot is that a shadow HVM vcpu which switches into
    4-level paging mode, with an L4 pagetable that contains a mapping which
    aliases Xen's SH_LINEAR_PT_VIRT_START, will fool the safety check for
    whether a SHADOW_LINEAR mapping is present.  As the check passes (when it
    should have failed), Xen subsequently falls over the missing mapping with
    a pagefault such as:

        (XEN) Pagetable walk from ffff8140a0503880:
        (XEN)  L4[0x102] = 000000046c218063 ffffffffffffffff
        (XEN)  L3[0x102] = 000000046c218063 ffffffffffffffff
        (XEN)  L2[0x102] = 000000046c218063 ffffffffffffffff
        (XEN)  L1[0x103] = 0000000000000000 ffffffffffffffff

    This is part of XSA-243.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>

commit 2c458dfcb59f3d9d8a35fc5ffbf780b6ed7a26a6
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Nov 16 10:37:29 2017 +0100

    x86: don't wrongly trigger linear page table assertion

    _put_page_type() may do multiple iterations until its cmpxchg() succeeds.
    It invokes set_tlbflush_timestamp() on the first iteration, however.
    Code inside the function takes care of this, but
    - the assertion in _put_final_page_type() would trigger on the second
      iteration if time stamps in a debug build are permitted to be
      sufficiently much wider than the default 6 bits (see WRAP_MASK in
      flushtlb.c),
    - it returning -EINTR (for a continuation to be scheduled) would leave
      the page in an inconsistent state (until the re-invocation completes).

    Make the set_tlbflush_timestamp() invocation conditional, bypassing it
    (for now) only in the case we really can't tolerate the stamp to be
    stored.

    This is part of XSA-240.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
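The XSA-240 follow-up in 2c458dfc boils down to making a side effect inside a cmpxchg retry loop conditional. A generic sketch of that pattern, with hypothetical page/type bookkeeping standing in for Xen's _put_page_type():

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct page_info {
    _Atomic uint32_t type_info;   /* type bits + reference count */
    uint32_t tlbflush_timestamp;
};

static uint32_t flush_clock;      /* stand-in for the global flush time */

static bool needs_stamp(uint32_t oldv, uint32_t newv)
{
    /* hypothetical condition, e.g. dropping the last typed reference */
    return (oldv & 0xffff) == 1 && (newv & 0xffff) == 0;
}

static void put_type(struct page_info *pg)
{
    uint32_t oldv = atomic_load(&pg->type_info);
    uint32_t newv;

    do {
        newv = oldv - 1;          /* drop one reference */

        /* The shape of the fix: store the timestamp only when this
         * transition actually needs it, rather than unconditionally on an
         * early iteration that may yet be retried. */
        if ( needs_stamp(oldv, newv) )
            pg->tlbflush_timestamp = flush_clock;

    } while ( !atomic_compare_exchange_weak(&pg->type_info, &oldv, newv) );
}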
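Returning to the altp2m fix (eb0660c6) earlier in this log: both issues it describes fit in a short hypothetical handler, illustrative only and not the Xen source, where the early error path must drop the RCU reference it took and must not continue into the enable path.

#include <errno.h>

struct domain { int ve_enabled; };

/* Hypothetical lock helpers; in Xen these are rcu_lock_current_domain()
 * and rcu_unlock_domain(). */
static struct domain *lock_current_domain(void) { static struct domain d; return &d; }
static void unlock_domain(struct domain *d) { (void)d; }

static int handle_enable_notify(struct domain *target)
{
    struct domain *curr = lock_current_domain();
    int rc = 0;

    if ( target != curr )
    {
        rc = -EOPNOTSUPP;     /* enabling #VE for another domain isn't supported */
        unlock_domain(curr);  /* fix #1: don't skip the unlock on this path */
        return rc;            /* fix #2: leaving here corresponds to the
                               * missing "break" in the real switch; without
                               * it, execution would continue and enable #VE
                               * for the current domain anyway */
    }

    curr->ve_enabled = 1;
    unlock_domain(curr);
    return rc;
}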
commit ca4b2e52a894845f26fc5b784f465e31c4cef90b
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Wed Nov 15 19:34:14 2017 +0000

    xen/arm: p2m: Add more debug in get_page_from_gva

    The function get_page_from_gva is used by the copy_*_guest helpers to
    translate a guest virtual address to a machine physical address and take
    a reference on the page.

    There are a couple of error paths that will return the same value, making
    it difficult to know the exact error.  Add more debug in each error path,
    only for debug builds.

    This should help narrow down the intermittent failure with the hypercall
    GNTTABOP_copy (see [1]).

    [1] https://lists.xen.org/archives/html/xen-devel/2017-11/msg00942.html

    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Acked-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
    Signed-off-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>

commit 17a33f943ea93daa7e8c61faad6d1dfd16176761
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Wed Nov 15 19:34:13 2017 +0000

    xen/arm: mm: Change the return value of gvirt_to_maddr

    Currently, gvirt_to_maddr returns -EFAULT when the translation fails.  It
    might be useful to return the PAR_EL1 (Physical Address Register) in such
    a case to get a better idea of the reason.

    So modify the return value to use 0 on success or return the PAR on
    failure.  The callers are modified to reflect the change of the return
    value.

    Note that with the change in gvirt_to_maddr, ma needs to be initialized
    to avoid GCC being confused ("value may be uninitialized") by the new
    construction.

    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Acked-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
    Signed-off-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
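The gvirt_to_maddr change in 17a33f94 is a calling-convention tweak; the sketch below (hypothetical helpers, not the Xen source) shows the shape of it: 0 on success, the failure syndrome otherwise, with the output variable initialised because it is now only written on the success path.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t paddr_t;

/* Hypothetical stand-in for the hardware translation (gva -> ma). */
static uint64_t translate_gva(uint64_t gva, paddr_t *ma)
{
    if ( gva == 0 )
        return 0x809;            /* made-up PAR-style failure syndrome */

    *ma = gva | (1ULL << 40);    /* pretend translation */
    return 0;                    /* success */
}

static int copy_from_guest_helper(uint64_t gva)
{
    paddr_t ma = 0;              /* initialised: only set on success */
    uint64_t par = translate_gva(gva, &ma);

    if ( par )
    {
        fprintf(stderr, "translation of gva %#llx failed, par=%#llx\n",
                (unsigned long long)gva, (unsigned long long)par);
        return -1;
    }

    printf("gva %#llx -> ma %#llx\n",
           (unsigned long long)gva, (unsigned long long)ma);
    return 0;
}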
(qemu changes not included)

_______________________________________________
osstest-output mailing list
osstest-output@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/osstest-output