[Xen-devel] [xen-4.7-testing test] 105998: regressions - trouble: blocked/broken/fail/pass
flight 105998 xen-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/105998/

Regressions :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 test-amd64-amd64-libvirt-pair 3 host-install/src_host(3) broken REGR. vs. 105855
 test-amd64-i386-xl 3 host-install(3) broken REGR. vs. 105855
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 3 host-install(3) broken REGR. vs. 105855
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 3 host-install(3) broken REGR. vs. 105855
 test-amd64-i386-migrupgrade 3 host-install/src_host(3) broken REGR. vs. 105855
 test-amd64-amd64-xl-multivcpu 3 host-install(3) broken REGR. vs. 105855
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 6 xen-boot fail REGR. vs. 105855
 test-armhf-armhf-xl-arndale 6 xen-boot fail REGR. vs. 105855
 test-amd64-amd64-xl-credit2 17 guest-localmigrate/x10 fail REGR. vs. 105855

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 6 xen-boot fail REGR. vs. 105855
 test-armhf-armhf-libvirt 13 saverestore-support-check fail like 105855
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail like 105855
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 105855
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail like 105855
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 105855
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 105855

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-xl 1 build-check(1) blocked n/a
 build-arm64-libvirt 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-qcow2 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt 1 build-check(1) blocked n/a
 test-arm64-arm64-xl-credit2 1 build-check(1) blocked n/a
 test-arm64-arm64-xl-rtds 1 build-check(1) blocked n/a
 test-arm64-arm64-xl-multivcpu 1 build-check(1) blocked n/a
 test-arm64-arm64-xl-xsm 1 build-check(1) blocked n/a
 build-arm64 5 xen-build fail never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start fail never pass
 test-amd64-amd64-xl-pvh-amd 11 guest-start fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 build-arm64-xsm 5 xen-build fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
 build-arm64-pvops 5 kernel-build fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl 12 migrate-support-check fail never pass
 test-armhf-armhf-xl 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-xsm 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail never pass

version targeted for testing:
 xen 500efc846cf79f99eef5cac680469efc91fea266
baseline version:
 xen 758378233b0b5d79a29735d95dc72410ef2f19aa

Last test of basis   105855  2017-02-16 15:36:15 Z  7 days
Failing since        105924  2017-02-20 15:11:38 Z  3 days  5 attempts
Testing same since   105998  2017-02-22 18:58:42 Z  0 days  1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Dario Faggioli <dario.faggioli@xxxxxxxxxx>
  David Woodhouse <dwmw@xxxxxxxxxx>
  George Dunlap <george.dunlap@xxxxxxxxxx>
  Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Kevin Tian <kevin.tian@xxxxxxxxx>
  Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
  Tamas K Lengyel <tamas@xxxxxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass
 build-arm64-xsm                                              fail
 build-armhf-xsm                                              pass
 build-i386-xsm                                               pass
 build-amd64-xtf                                              pass
 build-amd64                                                  pass
 build-arm64                                                  fail
 build-armhf                                                  pass
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-arm64-libvirt                                          blocked
 build-armhf-libvirt                                          pass
 build-i386-libvirt                                           pass
 build-amd64-prev                                             pass
 build-i386-prev                                              pass
 build-amd64-pvops                                            pass
 build-arm64-pvops                                            fail
 build-armhf-pvops                                            pass
 build-i386-pvops                                             pass
 build-amd64-rumprun                                          pass
 build-i386-rumprun                                           pass
 test-xtf-amd64-amd64-1                                       pass
 test-xtf-amd64-amd64-2                                       pass
 test-xtf-amd64-amd64-3                                       pass
 test-xtf-amd64-amd64-4                                       pass
 test-xtf-amd64-amd64-5                                       pass
 test-amd64-amd64-xl                                          pass
 test-arm64-arm64-xl                                          blocked
 test-armhf-armhf-xl                                          pass
 test-amd64-i386-xl                                           broken
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 fail
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         broken
 test-amd64-amd64-libvirt-xsm                                 pass
 test-arm64-arm64-libvirt-xsm                                 blocked
 test-armhf-armhf-libvirt-xsm                                 pass
 test-amd64-i386-libvirt-xsm                                  pass
 test-amd64-amd64-xl-xsm                                      pass
 test-arm64-arm64-xl-xsm                                      blocked
 test-armhf-armhf-xl-xsm                                      pass
 test-amd64-i386-xl-xsm                                       pass
 test-amd64-amd64-qemuu-nested-amd                            fail
 test-amd64-amd64-xl-pvh-amd                                  fail
 test-amd64-i386-qemut-rhel6hvm-amd                           pass
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass
 test-amd64-i386-freebsd10-amd64                              pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass
 test-amd64-amd64-rumprun-amd64                               pass
 test-amd64-amd64-xl-qemut-win7-amd64                         fail
 test-amd64-i386-xl-qemut-win7-amd64                          fail
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail
 test-amd64-i386-xl-qemuu-win7-amd64                          fail
 test-armhf-armhf-xl-arndale                                  fail
 test-amd64-amd64-xl-credit2                                  fail
 test-arm64-arm64-xl-credit2                                  blocked
 test-armhf-armhf-xl-credit2                                  pass
 test-armhf-armhf-xl-cubietruck                               pass
 test-amd64-i386-freebsd10-i386                               pass
 test-amd64-i386-rumprun-i386                                 pass
 test-amd64-amd64-qemuu-nested-intel                          pass
 test-amd64-amd64-xl-pvh-intel                                fail
 test-amd64-i386-qemut-rhel6hvm-intel                         pass
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass
 test-amd64-amd64-libvirt                                     pass
 test-arm64-arm64-libvirt                                     blocked
 test-armhf-armhf-libvirt                                     pass
 test-amd64-i386-libvirt                                      pass
 test-amd64-amd64-migrupgrade                                 pass
 test-amd64-i386-migrupgrade                                  broken
 test-amd64-amd64-xl-multivcpu                                broken
 test-arm64-arm64-xl-multivcpu                                blocked
 test-armhf-armhf-xl-multivcpu                                pass
 test-amd64-amd64-pair                                        pass
 test-amd64-i386-pair                                         pass
 test-amd64-amd64-libvirt-pair                                broken
 test-amd64-i386-libvirt-pair                                 pass
 test-amd64-amd64-amd64-pvgrub                                pass
 test-amd64-amd64-i386-pvgrub                                 pass
 test-amd64-amd64-pygrub                                      pass
 test-arm64-arm64-libvirt-qcow2                               blocked
 test-amd64-amd64-xl-qcow2                                    pass
 test-armhf-armhf-libvirt-raw                                 pass
 test-amd64-i386-xl-raw                                       pass
 test-amd64-amd64-xl-rtds                                     fail
 test-arm64-arm64-xl-rtds                                     blocked
 test-armhf-armhf-xl-rtds                                     pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass
 test-amd64-amd64-libvirt-vhd                                 pass
 test-armhf-armhf-xl-vhd                                      pass
 test-amd64-amd64-xl-qemut-winxpsp3                           pass
 test-amd64-i386-xl-qemut-winxpsp3                            pass
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass
 test-amd64-i386-xl-qemuu-winxpsp3                            pass

------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-step test-amd64-amd64-libvirt-pair host-install/src_host(3)
broken-step test-amd64-i386-xl host-install(3)
broken-step test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm host-install(3)
broken-step test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm host-install(3)
broken-step test-amd64-i386-migrupgrade host-install/src_host(3)
broken-step test-amd64-amd64-xl-multivcpu host-install(3)

Not pushing.

------------------------------------------------------------
commit 500efc846cf79f99eef5cac680469efc91fea266
Author: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Date:   Wed Feb 22 16:35:01 2017 +0000

    QEMU_TAG update

commit 8a9dfe392702cb987cab725fbda7345f4c3053da
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Mon Feb 20 16:02:47 2017 +0100

    VMX: fix VMCS race on context-switch paths

    When __context_switch() is being bypassed during original context
    switch handling, the vCPU "owning" the VMCS partially loses control
    of it: it will appear non-running to remote CPUs, and hence their
    attempt to pause the owning vCPU will have no effect on it (as it
    already looks to be paused). At the same time the "owning" CPU will
    re-enable interrupts eventually (at the latest when entering the idle
    loop) and hence becomes subject to IPIs from other CPUs requesting
    access to the VMCS. As a result, when __context_switch() finally gets
    run, the CPU may no longer have the VMCS loaded, and hence any
    accesses to it would fail. Hence we may need to re-load the VMCS in
    vmx_ctxt_switch_from().

    For consistency use the new function also in vmx_do_resume(), to
    avoid leaving an open-coded incarnation of it around.

    Reported-by: Kevin Mayer <Kevin.Mayer@xxxxxxxx>
    Reported-by: Anshul Makkar <anshul.makkar@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Kevin Tian <kevin.tian@xxxxxxxxx>
    Reviewed-by: Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
    Tested-by: Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
    master commit: 2f4d2198a9b3ba94c959330b5c94fe95917c364c
    master date: 2017-02-17 15:49:56 +0100
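[Editor's note: the commit above describes a reload-if-needed pattern on the
context-switch-from path. The following is a minimal illustrative sketch of
that pattern only, not the committed Xen code; current_vmcs_pa() and
vmcs_load() are hypothetical placeholder helpers, and the structure layout is
assumed for illustration.]

/*
 * Sketch: before saving state on the switch-from path, make sure this
 * CPU still has the vCPU's VMCS loaded.  An IPI from another CPU may
 * have moved it away while interrupts were enabled.
 */
struct vmx_vcpu {
    unsigned long vmcs_pa;              /* physical address of this vCPU's VMCS */
};

static void vmcs_reload_if_needed(struct vmx_vcpu *v)
{
    /* Only reload when the currently loaded VMCS is not ours. */
    if (current_vmcs_pa() != v->vmcs_pa)    /* hypothetical helper */
        vmcs_load(v->vmcs_pa);              /* hypothetical helper */
}

static void ctxt_switch_from(struct vmx_vcpu *v)
{
    vmcs_reload_if_needed(v);
    /* ... save guest state from the now-guaranteed-loaded VMCS ... */
}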
commit 19d4e55a01cdeafb6b14262806892fcd34bd205d
Author: George Dunlap <george.dunlap@xxxxxxxxxx>
Date:   Mon Feb 20 16:02:12 2017 +0100

    xen/p2m: Fix p2m_flush_table for non-nested cases

    Commit 71bb7304e7a7a35ea6df4b0cedebc35028e4c159 added flushing of
    nested p2m tables whenever the host p2m table changed. Unfortunately,
    in the process it added a filter to the p2m_flush_table() function so
    that the p2m would only be flushed if it was being used as a nested
    p2m. This meant that the p2m was not being flushed at all for altp2m
    callers.

    Only check np2m_base if the p2m_class indicates a nested p2m.

    NB that this is not a security issue: the only time this codepath is
    called is in cases where either nestedp2m or altp2m is enabled, and
    neither of them are in security support.

    Reported-by: Matt Leinhos <matt@xxxxxxxxxx>
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    Tested-by: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
    master commit: 6192e6378e094094906950120470a621d5b2977c
    master date: 2017-02-15 17:15:56 +0000

commit ad19a5189d8a5b7d48c40cf62ff3682d24194ddf
Author: David Woodhouse <dwmw@xxxxxxxxxx>
Date:   Mon Feb 20 16:01:47 2017 +0100

    x86/ept: allow write-combining on !mfn_valid() MMIO mappings again

    For some MMIO regions, such as those high above RAM, mfn_valid() will
    return false. Since the fix for XSA-154 in commit c61a6f74f80e ("x86:
    enforce consistent cachability of MMIO mappings"), guests have no
    longer been able to use PAT to obtain write-combining on such regions
    because the 'ignore PAT' bit is set in EPT.

    We probably want to err on the side of caution and preserve that
    behaviour for addresses in mmio_ro_ranges, but not for normal MMIO
    mappings. That necessitates a slight refactoring to check mfn_valid()
    later, and let the MMIO case get through to the right code path.

    Since we're not bailing out for !mfn_valid() immediately, the range
    checks need to be adjusted to cope: simply by masking in the low bits
    to account for 'order' instead of adding, to avoid overflow when the
    mfn is INVALID_MFN (which happens on unmap, since we carefully call
    this function to fill in the EMT even though the PTE won't be valid).

    The range checks are also slightly refactored to put only one of them
    in the fast path in the common case. If it doesn't overlap, then it
    *definitely* isn't contained, so we don't need both checks. And if it
    overlaps and is only one page, then it definitely *is* contained.

    Finally, add a comment clarifying how that 'return -1' works: it
    isn't returning an error and causing the mapping to fail; it relies
    on resolve_misconfig() being able to split the mapping later. So it's
    *only* sane to do it where order>0 and the 'problem' will be solved
    by splitting the large page. Not for blindly returning 'error', which
    I was tempted to do in my first attempt.

    Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
    master commit: 30921dc2df3665ca1b2593595aa6725ff013d386
    master date: 2017-02-07 14:30:01 +0100
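[Editor's note: the range-check reasoning in the EPT commit above can be
illustrated with the small standalone sketch below. It uses simplified types
and hypothetical names (range_last, overlaps, classify) purely to show the
masking and overlap/containment logic described in the message; it is not the
patched Xen code.]

#include <stdbool.h>
#include <stdint.h>

/* Last frame covered by [mfn, mfn + 2^order): mask in the low bits rather
 * than adding, so an all-ones INVALID_MFN does not overflow back to 0. */
static inline uint64_t range_last(uint64_t mfn, unsigned int order)
{
    return mfn | (((uint64_t)1 << order) - 1);
}

/* Does [mfn, last] overlap the special range [lo, hi]? */
static inline bool overlaps(uint64_t mfn, uint64_t last,
                            uint64_t lo, uint64_t hi)
{
    return mfn <= hi && last >= lo;
}

/* Fast-path reasoning from the commit message:
 *  - no overlap                      => certainly not contained;
 *  - overlap and order == 0 (1 page) => certainly contained;
 *  - otherwise a large page straddles the boundary and must be split
 *    later (the commit's 'return -1' case, resolved by splitting). */
static int classify(uint64_t mfn, unsigned int order,
                    uint64_t lo, uint64_t hi)
{
    uint64_t last = range_last(mfn, order);

    if (!overlaps(mfn, last, lo, hi))
        return 0;                    /* fully outside the special range */
    if (order == 0 || (mfn >= lo && last <= hi))
        return 1;                    /* fully inside it */
    return -1;                       /* straddles it: needs splitting */
}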
commit 19addfac3c32d34eee51eb401d18dcd48f6d1298
Author: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Date:   Mon Feb 20 16:01:20 2017 +0100

    xen: credit2: never consider CPUs outside of our cpupool.

    In fact, relying on the mask of what pCPUs belong to which Credit2
    runqueue is not enough. If we only do that, when Credit2 is the boot
    scheduler, we may ASSERT() or panic when moving a pCPU from Pool-0 to
    another cpupool.

    This is because pCPUs outside of any pool are considered part of
    cpupool0. This puts us at risk of crash when those same pCPUs are
    added to another pool and something other than the idle domain is
    found to be running on them.

    Note that, even if we prevent the above from happening (which is the
    purpose of this patch), this is still pretty bad. In fact, when we
    remove a pCPU from Pool-0:
    - in Credit1, we do *not* update prv->ncpus and prv->credit, which
      means we're considering the wrong total credits when doing
      accounting;
    - in Credit2, the pCPU remains part of one runqueue, and is hence at
      least considered during load balancing, even if no vCPU should
      really run there.

    In Credit1, this "only" causes skewed accounting and no crashes
    because there is a lot of `cpumask_and`ing going on with the cpumask
    of the domains' cpupool (which, BTW, comes at a price).

    A quick and not too involved (and easily backportable) solution for
    Credit2 is to do exactly the same.

    Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: e7191920261d20e52ca4c06a03589a1155981b04
    master date: 2017-01-24 17:02:07 +0000

commit d9dec4151a2ae2708c4b71f9e78257e5c874e6eb
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Mon Feb 20 16:00:20 2017 +0100

    x86/VT-x: Dump VMCS on VMLAUNCH/VMRESUME failure

    If a VMLAUNCH/VMRESUME fails due to invalid control or host state,
    dump the VMCS before crashing the domain.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Kevin Tian <kevin.tian@xxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: d0fd9ae54491328b10dee4003656c14b3bf3d3e9
    master date: 2016-07-04 10:51:48 +0100

(qemu changes not included)
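[Editor's note: the Credit2 commit above says the fix is to AND the scheduler's
candidate CPUs with the cpumask of the vCPU's cpupool, as Credit1 already does.
The sketch below illustrates only that masking idea, with a toy 64-CPU mask and
placeholder field names; it is not the scheduler's real data layout.]

#define NR_CPUS 64

typedef struct { unsigned long bits; } cpumask_t;   /* toy 64-CPU mask */

static void cpumask_and(cpumask_t *dst, const cpumask_t *a, const cpumask_t *b)
{
    dst->bits = a->bits & b->bits;
}

struct runqueue { cpumask_t active; };      /* pCPUs in this runqueue */
struct cpupool  { cpumask_t cpu_valid; };   /* pCPUs in the vCPU's pool */

/* Compute the pCPUs a vCPU may actually run on: never consider CPUs
 * that belong to the runqueue but lie outside the vCPU's cpupool. */
static cpumask_t candidate_cpus(const struct runqueue *rq,
                                const struct cpupool *pool)
{
    cpumask_t mask;

    cpumask_and(&mask, &rq->active, &pool->cpu_valid);
    return mask;
}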
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel