[Xen-devel] [libvirt test] 62551: regressions - FAIL
flight 62551 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/62551/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 16 guest-start/debianhvm.repeat fail REGR. vs. 62435

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-vhd  9 debian-di-install            fail  never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install            fail  never pass
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install          fail  never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore            fail  never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail  never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail  never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail  never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail  never pass
 test-amd64-amd64-libvirt-raw 11 migrate-support-check        fail  never pass
 test-amd64-i386-libvirt-raw  11 migrate-support-check        fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-vhd  11 migrate-support-check        fail  never pass
 test-amd64-amd64-libvirt-qcow2 11 migrate-support-check      fail  never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail  never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 11 migrate-support-check       fail  never pass

version targeted for testing:
 libvirt              68572de8228e3971174a83c227fcb018d6f684c7
baseline version:
 libvirt              5e06a4f063dc6cf2ae14a361ddeb805d3f3ae440

Last test of basis    62435  2015-09-27 08:39:40 Z    5 days
Testing same since    62551  2015-09-30 04:20:46 Z    2 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Cole Robinson <crobinso@xxxxxxxxxx>
  Ján Tomko <jtomko@xxxxxxxxxx>
  Michal Privoznik <mprivozn@xxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass
 build-armhf-xsm                                              pass
 build-i386-xsm                                               pass
 build-amd64                                                  pass
 build-armhf                                                  pass
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-armhf-libvirt                                          pass
 build-i386-libvirt                                           pass
 build-amd64-pvops                                            pass
 build-armhf-pvops                                            pass
 build-i386-pvops                                             pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail
 test-amd64-amd64-libvirt-xsm                                 pass
 test-armhf-armhf-libvirt-xsm                                 fail
 test-amd64-i386-libvirt-xsm                                  pass
 test-amd64-amd64-libvirt                                     pass
 test-armhf-armhf-libvirt                                     fail
 test-amd64-i386-libvirt                                      pass
 test-amd64-amd64-libvirt-pair                                pass
 test-amd64-i386-libvirt-pair                                 pass
 test-amd64-amd64-libvirt-qcow2                               pass
 test-armhf-armhf-libvirt-qcow2                               fail
 test-amd64-i386-libvirt-qcow2                                pass
 test-amd64-amd64-libvirt-raw                                 pass
 test-armhf-armhf-libvirt-raw                                 fail
 test-amd64-i386-libvirt-raw                                  pass
 test-amd64-amd64-libvirt-vhd                                 pass
 test-armhf-armhf-libvirt-vhd                                 fail
 test-amd64-i386-libvirt-vhd                                  pass

------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

Not pushing.

------------------------------------------------------------
commit 68572de8228e3971174a83c227fcb018d6f684c7
Author: Cole Robinson <crobinso@xxxxxxxxxx>
Date:   Mon Sep 28 19:47:09 2015 -0400

    qemu: Fix dynamic_ownership qemu.conf setting

    Commit 307fb904 (Sep 10) added a 'privileged' variable when creating
    the DAC driver:

    @@ -153,6 +157,7 @@ virSecurityManagerNewDAC(const char *virtDriver,
                              bool defaultConfined,
                              bool requireConfined,
                              bool dynamicOwnership,
    +                         bool privileged,
                              virSecurityManagerDACChownCallback chownCallback)

    But argument order is mixed up at the caller, swapping dynamicOwnership
    and privileged values.

    This corrects the argument order

    https://bugzilla.redhat.com/show_bug.cgi?id=1266628

commit d72a8f7465896572b38521d1d3e82e7d36eb3f4e
Author: Michal Privoznik <mprivozn@xxxxxxxxxx>
Date:   Thu Sep 24 18:00:06 2015 +0200

    virsh: Preserve startupPolicy in change-media command

    https://bugzilla.redhat.com/show_bug.cgi?id=1250331

    Even after my rework of startupPolicy handling, one command slipped
    my attention. The change-media command has a very unique approach to
    constructing disk XML. However, it will not preserve startupPolicy
    attribute.

    Signed-off-by: Michal Privoznik <mprivozn@xxxxxxxxxx>

commit 1b5685dadaebdb77db42f78ab380801fbceb09bc
Author: Ján Tomko <jtomko@xxxxxxxxxx>
Date:   Thu Sep 24 17:12:02 2015 +0200

    Create a shallow copy for volume building only if supported

    Since the previous commit, the shallow copy is only used inside
    the if (backend->buildVol) if.

commit 56a4e9cb613aff9cd6f828c0a9283fba55ac5951
Author: Ján Tomko <jtomko@xxxxxxxxxx>
Date:   Thu Sep 24 17:01:40 2015 +0200

    Update pool allocation with new values on volume creation

    Since commit e0139e3, we update the pool allocation with the
    user-provided allocation values. For qcow2, the allocation is ignored
    for volume building, but we still subtracted it from pool's allocation.

    This can result in interesting values if the user-provided allocation
    is large enough:

    Capacity:       104.71 GiB
    Allocation:     109.13 GiB
    Available:       16.00 EiB

    We already do a VolRefresh on volume creation. Also refresh the volume
    after creating and use the new value to update the pool.

    https://bugzilla.redhat.com/show_bug.cgi?id=1163091
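An aside on the first fix above (commit 68572de8): the reason a transposed
argument slips through is that the two parameters involved share the same
type, so the compiler has nothing to warn about. The self-contained sketch
below only illustrates that pattern; the function new_dac_driver and its
caller are simplified, hypothetical stand-ins, not libvirt's real
virSecurityManagerNewDAC or its call site.

    /* Illustrative sketch only -- not libvirt code.  Shows how swapping two
     * adjacent bool arguments compiles cleanly but inverts behaviour. */
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for a constructor that grew an extra bool
     * parameter ("privileged") next to an existing one. */
    static void
    new_dac_driver(bool dynamicOwnership, bool privileged)
    {
        printf("dynamicOwnership=%d privileged=%d\n",
               dynamicOwnership, privileged);
    }

    int
    main(void)
    {
        bool dynamicOwnership = false;  /* e.g. dynamic_ownership=0 in qemu.conf */
        bool privileged = true;

        /* Buggy call: arguments transposed, so the configuration value is
         * silently ignored and "privileged" gets the wrong value too. */
        new_dac_driver(privileged, dynamicOwnership);

        /* Fixed call: arguments in declaration order. */
        new_dac_driver(dynamicOwnership, privileged);
        return 0;
    }

Built with any C99 compiler, the first call prints the inverted values and
the second prints the intended ones, which is the class of bug the commit
describes.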
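A note on the "Available: 16.00 EiB" figure quoted in commit 56a4e9cb above:
16 EiB is 2^64 bytes, which suggests (this is an inference, not something the
commit message states) an unsigned 64-bit counter wrapping around when an
allocation larger than the pool's capacity is subtracted from it. The sketch
below reproduces only that arithmetic with hypothetical variable names; it is
not libvirt code.

    /* Illustrative sketch only: unsigned 64-bit subtraction wrapping to
     * roughly 16 EiB when the subtrahend exceeds the minuend. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        const double GiB = 1024.0 * 1024 * 1024;
        const double EiB = GiB * 1024 * 1024 * 1024;

        /* Hypothetical numbers mirroring the commit message. */
        uint64_t capacity   = (uint64_t)(104.71 * GiB);
        uint64_t allocation = (uint64_t)(109.13 * GiB); /* more than exists */

        /* Unsigned arithmetic wraps instead of going negative. */
        uint64_t available = capacity - allocation;

        printf("available = %" PRIu64 " bytes (~%.2f EiB)\n",
               available, available / EiB);
        return 0;
    }

Run as-is this prints roughly 16.00 EiB, matching the symptom in the commit
message; refreshing the volume and using its real allocation, as the commit
does, avoids the oversized subtraction in the first place.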