[xen-4.9-testing baseline-only test] 72498: regressions - trouble: broken/fail/pass
This run is configured for baseline tests only.

flight 72498 xen-4.9-testing real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72498/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl                          <job status>          broken
 test-amd64-amd64-xl                   4 host-install(4)            broken  REGR. vs. 72487
 test-amd64-amd64-xl-qemut-win7-amd64  7 xen-boot                   fail    REGR. vs. 72487
 test-amd64-amd64-qemuu-nested-intel  14 xen-boot/l1                fail    REGR. vs. 72487
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install            fail    REGR. vs. 72487

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds              7 xen-boot                   fail    REGR. vs. 72487

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop                 fail    blocked in 72487
 test-amd64-i386-xl-qemut-win10-i386  10 windows-install            fail    like 72487
 test-amd64-i386-xl-qemut-win7-amd64  16 guest-localmigrate/x10     fail    like 72487
 test-amd64-i386-xl-qemuu-win7-amd64  16 guest-localmigrate/x10     fail    like 72487
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install            fail    never pass
 test-amd64-i386-xl-qemuu-win10-i386  10 windows-install            fail    never pass
 test-amd64-i386-xl-qemuu-ws16-amd64  10 windows-install            fail    never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-install            fail    never pass
 test-armhf-armhf-xl-credit2          13 migrate-support-check      fail    never pass
 test-armhf-armhf-xl-credit2          14 saverestore-support-check  fail    never pass
 test-armhf-armhf-xl                  13 migrate-support-check      fail    never pass
 test-armhf-armhf-xl                  14 saverestore-support-check  fail    never pass
 test-armhf-armhf-xl-midway           13 migrate-support-check      fail    never pass
 test-armhf-armhf-xl-midway           14 saverestore-support-check  fail    never pass
 test-armhf-armhf-libvirt             13 migrate-support-check      fail    never pass
 test-armhf-armhf-libvirt             14 saverestore-support-check  fail    never pass
 test-armhf-armhf-libvirt-xsm         13 migrate-support-check      fail    never pass
 test-armhf-armhf-libvirt-xsm         14 saverestore-support-check  fail    never pass
 test-armhf-armhf-xl-xsm              13 migrate-support-check      fail    never pass
 test-armhf-armhf-xl-xsm              14 saverestore-support-check  fail    never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install            fail    never pass
 test-amd64-amd64-libvirt-xsm         13 migrate-support-check      fail    never pass
 test-amd64-amd64-libvirt             13 migrate-support-check      fail    never pass
 test-armhf-armhf-xl-multivcpu        13 migrate-support-check      fail    never pass
 test-armhf-armhf-xl-multivcpu        14 saverestore-support-check  fail    never pass
 test-amd64-i386-libvirt              13 migrate-support-check      fail    never pass
 test-amd64-i386-libvirt-xsm          13 migrate-support-check      fail    never pass
 test-armhf-armhf-xl-rtds             13 migrate-support-check      fail    never pass
 test-armhf-armhf-xl-rtds             14 saverestore-support-check  fail    never pass
 test-armhf-armhf-libvirt-raw         12 migrate-support-check      fail    never pass
 test-armhf-armhf-libvirt-raw         13 saverestore-support-check  fail    never pass
 test-amd64-i386-libvirt-qcow2        12 migrate-support-check      fail    never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd              12 migrate-support-check      fail    never pass
 test-armhf-armhf-xl-vhd              13 saverestore-support-check  fail    never pass
 test-amd64-amd64-qemuu-nested-amd    17 debian-hvm-install/l1/l2   fail    never pass
 test-amd64-amd64-libvirt-vhd         12 migrate-support-check      fail    never pass
 test-amd64-i386-xl-qemut-ws16-amd64  17 guest-stop                 fail    never pass
version targeted for testing:
 xen                  0a0dcdcd20e9711cbfb08db5b21af5299ee1eb8b
baseline version:
 xen                  ae34ab8c5d2e977f6d8081c2ce4494875232f563

Last test of basis    72487  2017-11-24 00:21:10 Z    5 days
Testing same since    72498  2017-11-29 01:49:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Julien Grall <julien.grall@xxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass
 build-armhf-xsm                                              pass
 build-i386-xsm                                               pass
 build-amd64-xtf                                              pass
 build-amd64                                                  pass
 build-armhf                                                  pass
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-armhf-libvirt                                          pass
 build-i386-libvirt                                           pass
 build-amd64-prev                                             pass
 build-i386-prev                                              pass
 build-amd64-pvops                                            pass
 build-armhf-pvops                                            pass
 build-i386-pvops                                             pass
 build-amd64-rumprun                                          pass
 build-i386-rumprun                                           pass
 test-xtf-amd64-amd64-1                                       pass
 test-xtf-amd64-amd64-2                                       pass
 test-xtf-amd64-amd64-3                                       pass
 test-xtf-amd64-amd64-4                                       pass
 test-xtf-amd64-amd64-5                                       pass
 test-amd64-amd64-xl                                          broken
 test-armhf-armhf-xl                                          pass
 test-amd64-i386-xl                                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass
 test-amd64-amd64-libvirt-xsm                                 pass
 test-armhf-armhf-libvirt-xsm                                 pass
 test-amd64-i386-libvirt-xsm                                  pass
 test-amd64-amd64-xl-xsm                                      pass
 test-armhf-armhf-xl-xsm                                      pass
 test-amd64-i386-xl-xsm                                       pass
 test-amd64-amd64-qemuu-nested-amd                            fail
 test-amd64-i386-qemut-rhel6hvm-amd                           pass
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass
 test-amd64-i386-freebsd10-amd64                              pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass
 test-amd64-amd64-rumprun-amd64                               pass
 test-amd64-amd64-xl-qemut-win7-amd64                         fail
 test-amd64-i386-xl-qemut-win7-amd64                          fail
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail
 test-amd64-i386-xl-qemuu-win7-amd64                          fail
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail
 test-amd64-i386-xl-qemut-ws16-amd64                          fail
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail
 test-amd64-amd64-xl-credit2                                  pass
 test-armhf-armhf-xl-credit2                                  pass
 test-amd64-i386-freebsd10-i386                               pass
 test-amd64-i386-rumprun-i386                                 pass
 test-amd64-amd64-xl-qemut-win10-i386                         fail
 test-amd64-i386-xl-qemut-win10-i386                          fail
 test-amd64-amd64-xl-qemuu-win10-i386                         fail
 test-amd64-i386-xl-qemuu-win10-i386                          fail
 test-amd64-amd64-qemuu-nested-intel                          fail
 test-amd64-i386-qemut-rhel6hvm-intel                         pass
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass
 test-amd64-amd64-libvirt                                     pass
 test-armhf-armhf-libvirt                                     pass
 test-amd64-i386-libvirt                                      pass
 test-amd64-amd64-livepatch                                   pass
 test-amd64-i386-livepatch                                    pass
 test-armhf-armhf-xl-midway                                   pass
 test-amd64-amd64-migrupgrade                                 pass
 test-amd64-i386-migrupgrade                                  pass
 test-amd64-amd64-xl-multivcpu                                pass
 test-armhf-armhf-xl-multivcpu                                pass
 test-amd64-amd64-pair                                        pass
 test-amd64-i386-pair                                         pass
 test-amd64-amd64-libvirt-pair                                pass
 test-amd64-i386-libvirt-pair                                 pass
 test-amd64-amd64-amd64-pvgrub                                pass
 test-amd64-amd64-i386-pvgrub                                 pass
 test-amd64-amd64-pygrub                                      pass
 test-amd64-i386-libvirt-qcow2                                pass
 test-amd64-amd64-xl-qcow2                                    pass
 test-armhf-armhf-libvirt-raw                                 pass
 test-amd64-i386-xl-raw                                       pass
 test-amd64-amd64-xl-rtds                                     fail
 test-armhf-armhf-xl-rtds                                     pass
 test-amd64-amd64-libvirt-vhd                                 pass
 test-armhf-armhf-xl-vhd                                      pass

------------------------------------------------------------
sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
    http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl broken
broken-step test-amd64-amd64-xl host-install(4)

Push not applicable.

------------------------------------------------------------
commit 0a0dcdcd20e9711cbfb08db5b21af5299ee1eb8b
Author: George Dunlap <george.dunlap@xxxxxxxxxx>
Date:   Tue Nov 28 13:27:32 2017 +0100

    p2m: Check return value of p2m_set_entry() when decreasing reservation

    If the entire range specified to p2m_pod_decrease_reservation() is
    marked populate-on-demand, then it will make a single p2m_set_entry()
    call, reducing its PoD entry count.

    Unfortunately, in the right circumstances, this p2m_set_entry() call
    may fail.  In that case, repeated calls to decrease_reservation() may
    cause p2m->pod.entry_count to fall below zero, potentially tripping
    over BUG_ON()s to the contrary.

    Instead, check to see if the entry succeeded, and return false if not.
    The caller will then call guest_remove_page() on the gfns, which will
    return -EINVAL upon finding no valid memory there to return.

    Unfortunately, if the order > 0, the entry may have partially changed.
    A domain_crash() is probably the safest thing in that case.

    Other p2m_set_entry() calls in the same function should be fine,
    because they are writing the entry at its current order.  Nonetheless,
    check the return value and crash if our assumption turns out to be
    wrong.

    This is part of XSA-247.

    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: a3d64de8e86f5812917d2d0af28298f80debdf9a
    master date: 2017-11-28 13:13:26 +0100

commit fb51cab5b182da4fa5abe6eac6571471540b80db
Author: George Dunlap <george.dunlap@xxxxxxxxxx>
Date:   Tue Nov 28 13:27:03 2017 +0100

    p2m: Always check to see if removing a p2m entry actually worked

    The PoD zero-check functions speculatively remove memory from the p2m,
    then check to see if it's completely zeroed, before putting it in the
    cache.

    Unfortunately, the p2m_set_entry() calls may fail if the underlying
    pagetable structure needs to change and the domain has exhausted its
    p2m memory pool: for instance, if we're removing a 2MiB region out of
    a 1GiB entry (in the p2m_pod_zero_check_superpage() case), or a 4k
    region out of a 2MiB or larger entry (in the p2m_pod_zero_check()
    case); and the return value is not checked.

    The underlying mfn will then be added into the PoD cache, and at some
    point mapped into another location in the p2m.  If the guest
    afterwards balloons out this memory, it will be freed to the
    hypervisor and potentially reused by another domain, in spite of the
    fact that the original domain still has writable mappings to it.

    There are several places where p2m_set_entry() shouldn't be able to
    fail, as it is guaranteed to write an entry of the same order that
    succeeded before.  Add a backstop of crashing the domain just in
    case, and an ASSERT_UNREACHABLE() to flag up the broken assumption on
    debug builds.

    While we're here, use PAGE_ORDER_2M rather than a magic constant.

    This is part of XSA-247.

    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 92790672dedf2eab042e04ecc277c19d40fd348a
    master date: 2017-11-28 13:13:03 +0100
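The two XSA-247 commits above share one pattern: a p2m_set_entry() call
whose result was previously ignored gets its return value checked, with a
recoverable fallback where possible and domain_crash() where the entry may
be left partially updated. The following self-contained C sketch
illustrates that pattern only; every type and helper in it is a simplified
stand-in invented for illustration, not Xen's real p2m interfaces or the
actual patch.

    /*
     * Minimal sketch of the XSA-247 pattern: check the result of a
     * set-entry style call instead of assuming it succeeds.
     * All names below are hypothetical stand-ins.
     */
    #include <stdbool.h>
    #include <stdio.h>

    struct domain { const char *name; bool crashed; };

    /* Stand-in for p2m_set_entry(): may fail, e.g. when splitting a
     * large mapping needs a page-table page the pool cannot supply. */
    static int set_entry(struct domain *d, unsigned long gfn,
                         unsigned int order)
    {
        (void)d; (void)gfn;
        return order > 0 ? -1 : 0;  /* simulate failure on large orders */
    }

    static void domain_crash_stub(struct domain *d)
    {
        d->crashed = true;
        fprintf(stderr, "domain %s crashed\n", d->name);
    }

    /* Before the fix, the return value was ignored and the PoD entry
     * count could be driven below zero by repeated failing calls. */
    static bool decrease_reservation(struct domain *d, unsigned long gfn,
                                     unsigned int order, long *pod_entries)
    {
        if ( set_entry(d, gfn, order) != 0 )
        {
            /* For order > 0 the entry may be partially updated, so
             * crashing the offending domain is the safest recovery. */
            if ( order > 0 )
                domain_crash_stub(d);
            return false;  /* caller falls back to per-page removal */
        }
        *pod_entries -= 1L << order;
        return true;
    }

    int main(void)
    {
        struct domain d = { "guest", false };
        long pod = 512;

        if ( !decrease_reservation(&d, 0x1000, 9, &pod) )
            printf("fallback path taken, pod count still %ld\n", pod);
        return 0;
    }

Compiled standalone, the failing order-9 call no longer corrupts the
bookkeeping: the count stays at 512 and the caller is told to fall back.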
commit 61c13eddc60ec9a66b861a6eaabd8bafa4f0700f
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Tue Nov 28 13:26:28 2017 +0100

    x86/pod: prevent infinite loop when shattering large pages

    When populating pages, the PoD may need to split large ones using
    p2m_set_entry and request the caller to retry (see ept_get_entry for
    instance).

    p2m_set_entry may fail to shatter if it is not possible to allocate
    memory for the new page table.  However, the error is not propagated,
    so the callers retry the PoD infinitely.

    Prevent the infinite loop by returning false when it is not possible
    to shatter the large mapping.

    This is XSA-246.

    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: a1c6c6768971ea387d7eba0803908ef0928b43ac
    master date: 2017-11-28 13:11:55 +0100
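To make the XSA-246 control-flow change concrete, here is a self-contained
C sketch of the pattern the commit describes: a failed shatter now
propagates an error to the caller instead of signalling "retry". Every
name in it is a simplified stand-in invented for illustration, not Xen's
actual PoD code.

    /*
     * Sketch of the XSA-246 fix: a failure to shatter a large page must
     * reach the caller, otherwise the demand-populate loop spins forever.
     * All names below are hypothetical stand-ins.
     */
    #include <stdbool.h>
    #include <stdio.h>

    static int pt_pool_pages = 1;  /* simulated page-table memory pool */

    /* Stand-in for the set-entry call that splits a superpage; fails
     * once the pool is exhausted. */
    static int shatter_large_page(unsigned long gfn)
    {
        (void)gfn;
        if ( pt_pool_pages == 0 )
            return -1;  /* no memory for the new page table */
        pt_pool_pages--;
        return 0;
    }

    /* Before the fix, a shatter failure was swallowed and the caller was
     * told to retry, looping forever once the pool was empty.  Now the
     * error is returned as false. */
    static bool pod_demand_populate(unsigned long gfn, bool *retry)
    {
        *retry = false;
        if ( shatter_large_page(gfn) != 0 )
            return false;  /* propagate the error: no more retries */
        *retry = true;     /* mapping was split; caller retries once */
        return true;
    }

    int main(void)
    {
        bool retry;
        unsigned int iterations = 0;

        /* The loop is now bounded: it stops as soon as populate fails. */
        while ( pod_demand_populate(0x2000, &retry) && retry )
            iterations++;
        printf("populate loop exited after %u retries\n", iterations);
        return 0;
    }

With a one-page pool the loop performs a single successful split and then
exits on the propagated failure, instead of retrying indefinitely.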
commit 52ad6515a2620ee7816f3f136d2de3edf3fd92b1
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Nov 28 13:25:30 2017 +0100

    update Xen version to 4.9.2-pre

(qemu changes not included)
_______________________________________________
osstest-output mailing list
osstest-output@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/osstest-output