[Xen-devel] [xen-unstable test] 101231: regressions - FAIL
flight 101231 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101231/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd                9 debian-di-install          fail REGR. vs. 101228

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-win7-amd64   16 guest-stop                 fail like 101228
 test-amd64-i386-xl-qemuu-win7-amd64   16 guest-stop                 fail like 101228
 test-amd64-amd64-xl-qemut-win7-amd64  16 guest-stop                 fail like 101228
 test-amd64-amd64-xl-qemuu-win7-amd64  16 guest-stop                 fail like 101228
 test-amd64-amd64-xl-rtds               9 debian-install             fail like 101228

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumprun-amd64         1 build-check(1)             blocked  n/a
 test-amd64-i386-rumprun-i386           1 build-check(1)             blocked  n/a
 build-amd64-rumprun                    5 rumprun-build              fail never pass
 test-armhf-armhf-libvirt-xsm          12 migrate-support-check      fail never pass
 test-armhf-armhf-libvirt-xsm          14 guest-saverestore          fail never pass
 build-i386-rumprun                     5 rumprun-build              fail never pass
 test-armhf-armhf-libvirt              12 migrate-support-check      fail never pass
 test-armhf-armhf-libvirt              14 guest-saverestore          fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2        11 migrate-support-check      fail never pass
 test-armhf-armhf-libvirt-qcow2        13 guest-saverestore          fail never pass
 test-armhf-armhf-xl                   12 migrate-support-check      fail never pass
 test-armhf-armhf-xl                   13 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-credit2           12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-credit2           13 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-arndale           12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-arndale           13 saverestore-support-check  fail never pass
 test-armhf-armhf-libvirt-raw          11 migrate-support-check      fail never pass
 test-armhf-armhf-libvirt-raw          13 guest-saverestore          fail never pass
 test-armhf-armhf-xl-rtds              12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-rtds              13 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-multivcpu         12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-multivcpu         13 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-cubietruck        12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-cubietruck        13 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-xsm               12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-xsm               13 saverestore-support-check  fail never pass
 test-amd64-amd64-libvirt-vhd          11 migrate-support-check      fail never pass
 test-amd64-amd64-libvirt-xsm          12 migrate-support-check      fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt              12 migrate-support-check      fail never pass
 test-amd64-amd64-xl-pvh-amd           11 guest-start                fail never pass
 test-amd64-i386-libvirt-xsm           12 migrate-support-check      fail never pass
 test-amd64-i386-libvirt               12 migrate-support-check      fail never pass
 test-amd64-amd64-qemuu-nested-amd     16 debian-hvm-install/l1/l2   fail never pass
 test-amd64-amd64-xl-pvh-intel         11 guest-start                fail never pass

version targeted for testing:
 xen                  b3d54cead6459567d9786ad415149868ee7f2f5b
baseline version:
 xen                  4b997d7aa8f82b5f7b0757bb6b73546dc98714a3

Last test of basis   101228  2016-09-30 19:13:58 Z  0 days
Testing same since   101231  2016-10-01 01:17:21 Z  0 days  1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
  Wei Liu <wei.liu2@xxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass
 build-armhf-xsm                                              pass
 build-i386-xsm                                               pass
 build-amd64-xtf                                              pass
 build-amd64                                                  pass
 build-armhf                                                  pass
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-armhf-libvirt                                          pass
 build-i386-libvirt                                           pass
 build-amd64-oldkern                                          pass
 build-i386-oldkern                                           pass
 build-amd64-prev                                             pass
 build-i386-prev                                              pass
 build-amd64-pvops                                            pass
 build-armhf-pvops                                            pass
 build-i386-pvops                                             pass
 build-amd64-rumprun                                          fail
 build-i386-rumprun                                           fail
 test-xtf-amd64-amd64-1                                       pass
 test-xtf-amd64-amd64-2                                       pass
 test-xtf-amd64-amd64-3                                       pass
 test-xtf-amd64-amd64-4                                       pass
 test-xtf-amd64-amd64-5                                       pass
 test-amd64-amd64-xl                                          pass
 test-armhf-armhf-xl                                          pass
 test-amd64-i386-xl                                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass
 test-amd64-amd64-libvirt-xsm                                 pass
 test-armhf-armhf-libvirt-xsm                                 fail
 test-amd64-i386-libvirt-xsm                                  pass
 test-amd64-amd64-xl-xsm                                      pass
 test-armhf-armhf-xl-xsm                                      pass
 test-amd64-i386-xl-xsm                                       pass
 test-amd64-amd64-qemuu-nested-amd                            fail
 test-amd64-amd64-xl-pvh-amd                                  fail
 test-amd64-i386-qemut-rhel6hvm-amd                           pass
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass
 test-amd64-i386-freebsd10-amd64                              pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass
 test-amd64-amd64-rumprun-amd64                               blocked
 test-amd64-amd64-xl-qemut-win7-amd64                         fail
 test-amd64-i386-xl-qemut-win7-amd64                          fail
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail
 test-amd64-i386-xl-qemuu-win7-amd64                          fail
 test-armhf-armhf-xl-arndale                                  pass
 test-amd64-amd64-xl-credit2                                  pass
 test-armhf-armhf-xl-credit2                                  pass
 test-armhf-armhf-xl-cubietruck                               pass
 test-amd64-i386-freebsd10-i386                               pass
 test-amd64-i386-rumprun-i386                                 blocked
 test-amd64-amd64-qemuu-nested-intel                          pass
 test-amd64-amd64-xl-pvh-intel                                fail
 test-amd64-i386-qemut-rhel6hvm-intel                         pass
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass
 test-amd64-amd64-libvirt                                     pass
 test-armhf-armhf-libvirt                                     fail
 test-amd64-i386-libvirt                                      pass
 test-amd64-amd64-migrupgrade                                 pass
 test-amd64-i386-migrupgrade                                  pass
 test-amd64-amd64-xl-multivcpu                                pass
 test-armhf-armhf-xl-multivcpu                                pass
 test-amd64-amd64-pair                                        pass
 test-amd64-i386-pair                                         pass
 test-amd64-amd64-libvirt-pair                                pass
 test-amd64-i386-libvirt-pair                                 pass
 test-amd64-amd64-amd64-pvgrub                                pass
 test-amd64-amd64-i386-pvgrub                                 pass
 test-amd64-amd64-pygrub                                      pass
 test-armhf-armhf-libvirt-qcow2                               fail
 test-amd64-amd64-xl-qcow2                                    pass
 test-armhf-armhf-libvirt-raw                                 fail
 test-amd64-i386-xl-raw                                       pass
 test-amd64-amd64-xl-rtds                                     fail
 test-armhf-armhf-xl-rtds                                     pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass
 test-amd64-amd64-libvirt-vhd                                 pass
 test-armhf-armhf-xl-vhd                                      fail
 test-amd64-amd64-xl-qemut-winxpsp3                           pass
 test-amd64-i386-xl-qemut-winxpsp3                            pass
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass
 test-amd64-i386-xl-qemuu-winxpsp3                            pass

------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
 http://logs.test-lab.xenproject.org/osstest/logs
Explanation of these reports, and of osstest in general, is at
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
 http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

Not pushing.

------------------------------------------------------------
commit b3d54cead6459567d9786ad415149868ee7f2f5b
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Fri Sep 30 15:10:22 2016 -0400

    tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS,NPAGES,UUID]
    in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS.

    These operations are used during the save process of migration.
    Instead of doing 64 hypercalls, let us do just one.

    We modify the 'struct xen_tmem_client' structure (used in
    XEN_SYSCTL_TMEM_OP_[GET|SET]_CLIENT_INFO) to have an extra field,
    'nr_pools'. Armed with that, the code slurping up pages from the
    hypervisor can allocate a structure (struct tmem_pool_info) big
    enough to contain all the active pools, and then just iterate over
    each one and save it in the stream.

    We are also re-using one of the subcommand numbers for this; as such
    the XEN_SYSCTL_INTERFACE_VERSION should be incremented, and that was
    done in the patch titled "tmem/libxc: Squash
    XEN_SYSCTL_TMEM_OP_[SET|SAVE]..".

    In xc_tmem_[save|restore] we also added proper memory handling of
    'buf' and 'pools'. Because of the loops, and to make the code as easy
    as possible to review, we add a goto label and jump to it for almost
    all error conditions.

    The include of inttypes is required for the PRId64 macro to work
    (which is needed to compile this code under 32-bit).

    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
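As a rough illustration of the flow this batching enables (the type layout and the helpers below are assumptions made for the sketch, not the actual Xen or libxc definitions), the toolstack-side save path becomes: one allocation sized by nr_pools, one call, then a loop over the returned records.

    /* Sketch only: pool_info_sketch and get_pools_sketch are illustrative
     * stand-ins, not the real Xen/libxc structures or hypercall wrappers. */
    #include <inttypes.h>   /* PRId64, as noted in the commit message */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct pool_info_sketch {          /* assumed per-pool record */
        uint32_t flags;
        uint64_t uuid[2];
        uint64_t n_pages;
    };

    /* Stand-in for the single XEN_SYSCTL_TMEM_OP_GET_POOLS sub-call. */
    static int get_pools_sketch(uint32_t nr_pools, struct pool_info_sketch *pools)
    {
        (void)nr_pools; (void)pools;   /* a real wrapper would issue the sysctl */
        return 0;
    }

    int save_pools_sketch(uint32_t nr_pools)   /* nr_pools from the client info */
    {
        struct pool_info_sketch *pools;
        uint32_t i;
        int rc = -1;

        pools = calloc(nr_pools, sizeof(*pools));
        if (!pools)
            goto out;                  /* one error label, as the commit describes */

        if (get_pools_sketch(nr_pools, pools))
            goto out;                  /* one hypercall instead of 64 */

        /* Iterate over the active pools and write each into the migration
         * stream (here just printed). */
        for (i = 0; i < nr_pools; i++)
            printf("pool %" PRIu32 ": flags %#" PRIx32 " pages %" PRId64 "\n",
                   i, pools[i].flags, (int64_t)pools[i].n_pages);

        rc = 0;
     out:
        free(pools);
        return rc;
    }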
commit c1469755537f71b5e4a433c29926c89a56337d75
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Fri Sep 30 15:10:01 2016 -0400

    tmem/xc_tmem_control: Rename 'arg1' to 'len' and 'arg2' to arg.

    That is what they are used for; let us make it clearer.

    Of all the various sub-commands, the only one that needed a semantic
    change is XEN_SYSCTL_TMEM_OP_SAVE_BEGIN. In the past it used 'arg1';
    now we move it to use 'arg'. Since that code is only used during
    migration, which is tied to the toolstack, it is OK to change it.
    We should increment the XEN_SYSCTL_INTERFACE_VERSION because of this,
    and that was fortunately done in the patch titled
    "tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE]..".

    While at it, also fix xc_tmem_control_oid to properly handle 'buf'
    and bounce it as appropriate.

    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

commit 4ca5e0103d0c713e9ec9fefe4ca9351abc342ad7
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Mon Sep 26 11:05:09 2016 -0400

    tmem: Unify XEN_SYSCTL_TMEM_OP_[[SAVE_[BEGIN|END]|RESTORE_BEGIN] return values.

    For success they used to be 1 ([SAVE,RESTORE]_BEGIN), 0 if the guest
    did not have any tmem (but only for SAVE_BEGIN), and -1 for any type
    of failure. And SAVE_END (which you would think would mirror
    SAVE_BEGIN) had 0 for success and -1 if the guest did not have any
    tmem enabled for it. This is confusing.

    Now the code will return 0 if the operation was successful. Various
    XEN_EXX values are returned if tmem is not enabled or the operation
    could not be performed.

    The xc_tmem.c code only needs to check in one place - where we use
    SAVE_BEGIN. The place where RESTORE_BEGIN is used will have errno set
    to the proper error value and the return will be -1, so it will still
    fail properly.

    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

commit c4a398701ecf07a24ca391cbdc4a84fcca7c8d69
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Fri Sep 30 10:53:01 2016 -0400

    tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE]..

    Specifically:

    XEN_SYSCTL_TMEM_OP_SET_[WEIGHT,COMPRESS] are now done via
    XEN_SYSCTL_TMEM_SET_CLIENT_INFO, and
    XEN_SYSCTL_TMEM_OP_SAVE_GET_[VERSION,MAXPOOLS,CLIENT_WEIGHT,CLIENT_FLAGS]
    can now be retrieved via XEN_SYSCTL_TMEM_GET_CLIENT_INFO.

    All this information is now in 'struct xen_tmem_client', and that is
    what we pass around.

    We also rev up the XEN_SYSCTL_INTERFACE_VERSION as we are re-using the
    numeric values of the deleted sub-ops (and henceforth the information
    is retrieved differently).

    On the toolstack side, prior to this patch, xc_tmem_control would use
    the bounce buffer only when arg1 was set and the cmd was LIST. With
    XEN_SYSCTL_TMEM_OP_SET_[WEIGHT|COMPRESS] that made sense, as 'arg1'
    would hold the value. However, for the other ones (say
    XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID) 'arg1' would be the length of
    'buf'. If this is confusing, do not despair: the patch titled
    "tmem/xc_tmem_control: Rename 'arg1' to 'len' and 'arg2' to arg."
    takes care of that.

    The astute reader of the toolstack code will discover that we only
    used the bounce buffer for LIST, not for any other subcommand that
    used 'buf', which means that the contents of 'buf' would never be
    copied back to the caller's 'buf'. The author is not sure how this
    could possibly have worked; perhaps Xen 4.1 (when this was introduced)
    was more relaxed about the bounce buffer being enabled. Anyhow, this
    fixes xc_tmem_control to do it for any subcommand that has 'arg1'.

    Lastly, some of the checks in xc_tmem_[restore|save] are removed as
    they can never be reached (it is not even clear how they could have
    been reached in the original submission). One of them is the check of
    the weight against -1, when in fact the hypervisor would never have
    provided that value. Now the checks are simple - the hypercall always
    returns ->version and ->maxpools (mirroring how it was done prior to
    this patch). But if one wants to check whether a guest has any tmem
    activity, the patch titled "tmem: Batch and squash
    XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS,NPAGES,UUID] in one sub-call:
    XEN_SYSCTL_TMEM_OP_GET_POOLS." adds an ->nr_pools field to check for
    that. We also add the check for ->version and ->maxpools and remove
    the TODO.

    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
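To make the consolidation concrete, here is a minimal sketch: the struct layout and the two helpers below are assumptions standing in for the real 'struct xen_tmem_client' and the GET/SET_CLIENT_INFO plumbing, not the actual libxc API. Setting a client's weight becomes a read-modify-write on one structure instead of a dedicated SET_WEIGHT sub-op.

    /* Sketch only: names and layout are illustrative assumptions. */
    #include <stdint.h>
    #include <string.h>

    struct client_info_sketch {        /* rough shape of 'struct xen_tmem_client' */
        uint32_t version;              /* was SAVE_GET_VERSION */
        uint32_t maxpools;             /* was SAVE_GET_MAXPOOLS */
        uint32_t nr_pools;             /* added by the GET_POOLS batching patch */
        uint32_t weight;               /* was SET_WEIGHT / SAVE_GET_CLIENT_WEIGHT */
        uint32_t flags;                /* was SAVE_GET_CLIENT_FLAGS (compress, ...) */
    };

    /* Stand-ins for wrappers around XEN_SYSCTL_TMEM_[GET|SET]_CLIENT_INFO. */
    static int get_client_info_sketch(uint32_t cli_id,
                                      struct client_info_sketch *info)
    {
        (void)cli_id;
        memset(info, 0, sizeof(*info));    /* a real wrapper would issue the sysctl */
        return 0;
    }

    static int set_client_info_sketch(uint32_t cli_id,
                                      const struct client_info_sketch *info)
    {
        (void)cli_id; (void)info;          /* a real wrapper would issue the sysctl */
        return 0;
    }

    /* One GET, tweak the field, one SET - no per-attribute sub-ops. */
    int set_weight_sketch(uint32_t cli_id, uint32_t weight)
    {
        struct client_info_sketch info;

        if (get_client_info_sketch(cli_id, &info))
            return -1;
        info.weight = weight;
        return set_client_info_sketch(cli_id, &info);
    }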
commit 92928ab27709fe6eab50648aa7a9aaad7ede8da4
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Fri Sep 30 10:50:32 2016 -0400

    tmem/sysctl: Add union in struct xen_sysctl_tmem_op

    No functional change. We do this to prepare for another entry to be
    added in the union. See the patch titled
    "tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE]".

    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

commit dd24b046b94efe2f97b906123f2eefb3235116b1
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Fri Sep 30 10:10:42 2016 -0400

    tmem: Move client weight, frozen, live_migrating, and compress in its
    own structure.

    This paves the way to making only one hypercall to retrieve/set this
    information instead of multiple ones.

    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

commit 1a70e54b5fd7b8a3b04c739a4de27b2509b42180
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Tue Sep 27 09:40:22 2016 -0400

    tmem: Delete deduplication (and tze) code.

    A couple of reasons:
     - It can lead to security issues (see row-hammer, KSM and similar
       attacks).
     - The code is quite complex.
     - Deduplication is good if the pages themselves are the same, but
       that is hardly guaranteed.
     - We got some gains (if pages are deduped) but at the cost of making
       the code less maintainable.
     - tze depends on the deduplication code.

    As such, deleting it.

    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

commit 840c5c9e6675a7ae9665fcc610b3112d1da8b672
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Wed Sep 21 16:53:51 2016 -0400

    tmem: Retire XEN_SYSCTL_TMEM_OP_[SET_CAP|SAVE_GET_CLIENT_CAP]

    It is not used by anything. Its intent was to complement the 'weight'
    attribute, but there has not been any request for this. If there is a
    need to resurface it, it can be integrated back via the
    XEN_SYSCTL_TMEM_SET_CLIENT_INFO introduced in
    "tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE]..".

    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

commit f62c86e4977e7498e81211364440ec995253dd30
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Wed Sep 21 21:18:57 2016 -0400

    libxc/tmem/restore: Remove call to XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION

    The only thing this hypercall returns is TMEM_SPEC_VERSION. The
    comment around it is also misleading - this call does not do any
    domain operation.

    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel