[Xen-devel] [xen-4.4-testing test] 61782: regressions - FAIL
flight 61782 xen-4.4-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/61782/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qcow2          9 debian-di-install          fail REGR. vs. 60727
 test-amd64-i386-xl-raw            9 debian-di-install          fail REGR. vs. 60727
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-localmigrate/x10   fail REGR. vs. 60727

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-libvirt-vhd       9 debian-di-install          fail REGR. vs. 60727
 test-amd64-amd64-libvirt-raw      9 debian-di-install          fail REGR. vs. 60727
 test-amd64-amd64-libvirt-vhd      9 debian-di-install          fail REGR. vs. 60727
 test-armhf-armhf-xl-multivcpu    16 guest-start/debian.repeat  fail like 60696
 test-amd64-i386-xl-vhd            9 debian-di-install          fail like 60727
 test-amd64-i386-libvirt-qcow2     9 debian-di-install          fail like 60727
 test-amd64-amd64-xl-vhd           9 debian-di-install          fail like 60727
 test-amd64-i386-libvirt          11 guest-start                fail like 60727
 test-amd64-amd64-libvirt         11 guest-start                fail like 60727
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail like 60727

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64 1 build-check(1)            blocked n/a
 test-amd64-amd64-migrupgrade      1 build-check(1)             blocked n/a
 test-amd64-i386-migrupgrade       1 build-check(1)             blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)             blocked n/a
 test-armhf-armhf-libvirt-qcow2    9 debian-di-install          fail never pass
 test-armhf-armhf-xl-qcow2         9 debian-di-install          fail never pass
 build-amd64-rumpuserxen           6 xen-build                  fail never pass
 build-i386-rumpuserxen            6 xen-build                  fail never pass
 test-amd64-amd64-xl-qcow2         9 debian-di-install          fail never pass
 build-amd64-prev                  5 xen-build                  fail never pass
 test-armhf-armhf-libvirt-raw      9 debian-di-install          fail never pass
 test-armhf-armhf-xl-vhd           9 debian-di-install          fail never pass
 build-i386-prev                   5 xen-build                  fail never pass
 test-armhf-armhf-xl-raw           9 debian-di-install          fail never pass
 test-armhf-armhf-libvirt         11 guest-start                fail never pass
 test-armhf-armhf-xl-arndale      12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-arndale      13 saverestore-support-check  fail never pass
 test-amd64-i386-libvirt-pair     21 guest-migrate/src_host/dst_host fail never pass
 test-amd64-amd64-libvirt-pair    21 guest-migrate/src_host/dst_host fail never pass
 test-amd64-i386-libvirt-raw      11 migrate-support-check      fail never pass
 test-armhf-armhf-xl-multivcpu    13 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-multivcpu    12 migrate-support-check      fail never pass
 test-amd64-amd64-libvirt-qcow2   11 migrate-support-check      fail never pass
 test-armhf-armhf-xl-credit2      13 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-credit2      12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-cubietruck   12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-cubietruck   13 saverestore-support-check  fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 21 leak-check/check        fail never pass
 test-armhf-armhf-xl              12 migrate-support-check      fail never pass
 test-armhf-armhf-xl              13 saverestore-support-check  fail never pass
 test-armhf-armhf-libvirt-vhd      9 debian-di-install          fail never pass

version targeted for testing:
 xen                  dbded5568e3978360bd044b13891fb81471945b7
baseline version:
 xen                  3646b134c1673f09c0a239de10b0da4c9265c8e8

Last test of basis    60727  2015-08-16 16:15:09 Z   27 days
Failing since         60802  2015-08-20 14:41:37 Z   23 days  11 attempts
Testing same since    61782  2015-09-11 07:48:50 Z    2 days   1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Anshul Makkar <anshul.makkar@xxxxxxxxxx>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@xxxxxxx>
  Ian Campbell <ian.campbell@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Jim Fehlig <jfehlig@xxxxxxxx>
  Julien Grall <julien.grall@xxxxxxxxxx>
  Roger Pau Monné <roger.pau@xxxxxxxxxx>
  Wei Liu <wei.liu2@xxxxxxxxxx>

jobs:
 build-amd64-xend                                          pass
 build-i386-xend                                           pass
 build-amd64                                               pass
 build-armhf                                               pass
 build-i386                                                pass
 build-amd64-libvirt                                       pass
 build-armhf-libvirt                                       pass
 build-i386-libvirt                                        pass
 build-amd64-prev                                          fail
 build-i386-prev                                           fail
 build-amd64-pvops                                         pass
 build-armhf-pvops                                         pass
 build-i386-pvops                                          pass
 build-amd64-rumpuserxen                                   fail
 build-i386-rumpuserxen                                    fail
 test-amd64-amd64-xl                                       pass
 test-armhf-armhf-xl                                       pass
 test-amd64-i386-xl                                        pass
 test-amd64-i386-qemut-rhel6hvm-amd                        pass
 test-amd64-i386-qemuu-rhel6hvm-amd                        pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64                 pass
 test-amd64-i386-xl-qemut-debianhvm-amd64                  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                 pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                  pass
 test-amd64-i386-freebsd10-amd64                           pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                      pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                       pass
 test-amd64-amd64-rumpuserxen-amd64                        blocked
 test-amd64-amd64-xl-qemut-win7-amd64                      fail
 test-amd64-i386-xl-qemut-win7-amd64                       fail
 test-amd64-amd64-xl-qemuu-win7-amd64                      fail
 test-amd64-i386-xl-qemuu-win7-amd64                       fail
 test-armhf-armhf-xl-arndale                               pass
 test-amd64-amd64-xl-credit2                               pass
 test-armhf-armhf-xl-credit2                               pass
 test-armhf-armhf-xl-cubietruck                            pass
 test-amd64-i386-freebsd10-i386                            pass
 test-amd64-i386-rumpuserxen-i386                          blocked
 test-amd64-i386-qemut-rhel6hvm-intel                      pass
 test-amd64-i386-qemuu-rhel6hvm-intel                      pass
 test-amd64-amd64-libvirt                                  fail
 test-armhf-armhf-libvirt                                  fail
 test-amd64-i386-libvirt                                   fail
 test-amd64-amd64-migrupgrade                              blocked
 test-amd64-i386-migrupgrade                               blocked
 test-amd64-amd64-xl-multivcpu                             pass
 test-armhf-armhf-xl-multivcpu                             fail
 test-amd64-amd64-pair                                     pass
 test-amd64-i386-pair                                      pass
 test-amd64-amd64-libvirt-pair                             fail
 test-amd64-i386-libvirt-pair                              fail
 test-amd64-amd64-pv                                       pass
 test-amd64-i386-pv                                        pass
 test-amd64-amd64-amd64-pvgrub                             pass
 test-amd64-amd64-i386-pvgrub                              pass
 test-amd64-amd64-pygrub                                   pass
 test-amd64-amd64-libvirt-qcow2                            pass
 test-armhf-armhf-libvirt-qcow2                            fail
 test-amd64-i386-libvirt-qcow2                             fail
 test-amd64-amd64-xl-qcow2                                 fail
 test-armhf-armhf-xl-qcow2                                 fail
 test-amd64-i386-xl-qcow2                                  fail
 test-amd64-amd64-libvirt-raw                              fail
 test-armhf-armhf-libvirt-raw                              fail
 test-amd64-i386-libvirt-raw                               pass
 test-amd64-amd64-xl-raw                                   pass
 test-armhf-armhf-xl-raw                                   fail
 test-amd64-i386-xl-raw                                    fail
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                  pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                  pass
 test-amd64-amd64-libvirt-vhd                              fail
 test-armhf-armhf-libvirt-vhd                              fail
 test-amd64-i386-libvirt-vhd                               fail
 test-amd64-amd64-xl-vhd                                   fail
 test-armhf-armhf-xl-vhd                                   fail
 test-amd64-i386-xl-vhd                                    fail
 test-amd64-i386-xend-qemut-winxpsp3                       fail
 test-amd64-amd64-xl-qemut-winxpsp3                        pass
 test-amd64-amd64-xl-qemuu-winxpsp3                        pass

------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
 http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
 http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

Not pushing.

------------------------------------------------------------
commit dbded5568e3978360bd044b13891fb81471945b7
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Sep 10 15:58:41 2015 +0200

    x86/NUMA: make init_node_heap() respect Xen heap limit

    On NUMA systems, where we try to use node local memory for the basic
    control structures of the buddy allocator, this special case needs to
    take into consideration a possible address width limit placed on the
    Xen heap.
    In turn this (but also other, more abstract considerations) requires
    that xenheap_max_mfn() not be called more than once (at most we might
    permit it to be called a second time with a larger value than was
    passed the first time), and be called only before calling
    end_boot_allocator().

    While inspecting all the involved code, a couple of off-by-one issues
    were found (and are being corrected here at once):
    - arch_init_memory() cleared one too many page table slots
    - the highmem_start based invocation of xenheap_max_mfn() passed too
      big a value
    - xenheap_max_mfn() calculated the wrong bit count in edge cases

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

    xen/arm64: do not (incorrectly) limit size of xenheap

    The commit 88e3ed61642bb393458acc7a9bd2f96edc337190 "x86/NUMA: make
    init_node_heap() respect Xen heap limit" breaks boot on the arm64
    board X-Gene.

    The xenheap bits variable is used to know the last RAM MFN always
    mapped in Xen virtual memory. If the value is 0, it means that all
    the memory is always mapped in Xen virtual memory.

    On X-Gene, the RAM bank resides above 128GB and the last xenheap MFN
    is 0x4400000. With the new way of calculating the number of bits,
    xenheap_bits will be equal to 38. This hides all of the RAM and makes
    it impossible to allocate xenheap memory.

    Given that aarch64 always has all memory mapped in Xen virtual
    memory, it is not necessary to call xenheap_max_mfn(), which sets the
    number of bits.
    Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    master commit: 88e3ed61642bb393458acc7a9bd2f96edc337190
    master date: 2015-09-01 14:02:57 +0200
    master commit: 0a7167d9b20cdc48e6ea320fbbb920b3267c9757
    master date: 2015-09-04 14:58:07 +0100

commit e554ae491d780435713381e43deb8356083be3ee
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Thu Sep 10 15:56:03 2015 +0200

    mm: populate_physmap: validate correctly the gfn for direct mapped domain

    A direct mapped domain already has its memory allocated 1:1, so we
    directly use the gfn as the mfn to map the RAM in the guest.

    While we validate that the page associated with the first mfn belongs
    to the domain, the subsequent MFNs are not validated when the
    extent_order is > 0. This may result in mapping a memory region
    (MMIO, RAM) which doesn't belong to the domain.

    However, only DOM0 on ARM uses a direct memory mapping, so this
    doesn't affect any guest (at least on the upstream version) or x86.

    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    master commit: 9503ab0e9c6a41a1ee7a70c8ea9313d08ebaa8c5
    master date: 2015-08-13 14:41:09 +0200

commit e19042ffbdf07e217c827eb0f722be5fe1623ea3
Author: Anshul Makkar <anshul.makkar@xxxxxxxxxx>
Date:   Thu Sep 10 15:55:23 2015 +0200

    x86/mm: Make {hap,shadow}_teardown() preemptible

    A domain with sufficient shadow allocation can cause a watchdog
    timeout during domain destruction. Expand the existing -EAGAIN logic
    in paging_teardown() to allow {hap/sh}_set_allocation() to become
    restartable during the DOMCTL_destroydomain hypercall.
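[Editor's note: the restartable-teardown pattern described in the {hap,shadow}_teardown() commit above can be sketched roughly as follows. This is an illustration in Python, not Xen's C code; the function names, the batch size, and the toolstack loop are all hypothetical, and the real logic lives in paging_teardown() and {hap/sh}_set_allocation().]

```python
EAGAIN = 11  # illustrative errno value

def teardown_preemptible(pages_left, batch=1000):
    """Free at most `batch` pages per invocation, then either finish (0)
    or report -EAGAIN so the hypercall can be reissued.  Bounding the
    work per call is what keeps the watchdog from firing."""
    freed = min(pages_left, batch)
    pages_left -= freed
    return pages_left, (-EAGAIN if pages_left else 0)

def destroy_domain(pages):
    """Caller-side loop: keep reissuing the 'hypercall' until teardown
    completes.  Returns the number of invocations needed."""
    calls = 0
    rc = -EAGAIN
    while rc == -EAGAIN:
        pages, rc = teardown_preemptible(pages)
        calls += 1
    return calls
```

With 2500 pages and a batch of 1000, `destroy_domain(2500)` needs three invocations instead of one long, watchdog-tripping call.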
    Signed-off-by: Anshul Makkar <anshul.makkar@xxxxxxxxxx>
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
    master commit: 0174da5b79752e2d5d6ca0faed89536e8f3d91c7
    master date: 2015-08-06 10:04:43 +0100

commit cfb5d2001784dfdec638ba335fd9252f5833ee2d
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Sep 10 15:54:13 2015 +0200

    x86/NUMA: don't account hotplug regions

    ... except in cases where they really matter: node_memblk_range[] is
    now the only place where all regions get stored. nodes[] and
    NODE_DATA() track present memory only.

    This improves the reporting when nodes have disjoint "normal" and
    hotplug regions, with the hotplug region sitting above the highest
    populated page. In such cases a node's spanned-pages value (visible
    in both XEN_SYSCTL_numainfo and 'u' debug key output) covered all the
    way up to the top of populated memory, giving quite a different
    picture from what an otherwise identically configured system without
    hotplug regions would report.

    Note, however, that the actual hotplug case (as well as cases of
    nodes with multiple disjoint present regions) is still not being
    handled such that the reported values would represent how much memory
    a node really has (but that can be considered intentional).

    Reported-by: Jim Fehlig <jfehlig@xxxxxxxx>

    This at once makes nodes_cover_memory() no longer consider E820_RAM
    regions covered by SRAT hotplug regions.

    Also reject self-overlaps with mismatching hotplug flags.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Tested-by: Jim Fehlig <jfehlig@xxxxxxxx>
    master commit: c011f470e6e79208f5baa071b4d072b78c88e2ba
    master date: 2015-08-31 13:52:24 +0200

commit 8bea7194a645d5ecb27ad2874eeff7a5734510ce
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Sep 10 15:53:37 2015 +0200

    x86/NUMA: fix setup_node()

    The function referenced an __initdata object (nodes_found).
    Since this being a node mask was more complicated than needed, the
    variable gets replaced by a simple counter. Check at once that the
    count of nodes doesn't go beyond MAX_NUMNODES.

    Also consolidate three printk()s related to the function's use into
    just one.

    Finally (quite the opposite of the above issue) __init-annotate
    nodes_cover_memory().

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 8f945d36d9bddd5b589ba23c7322b30d623dd084
    master date: 2015-08-31 13:51:52 +0200

commit 181ebad4a0e9f140054208b17e4936f85c4ee39c
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Sep 10 15:52:28 2015 +0200

    IOMMU: skip domains without page tables when dumping

    Reported-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Tested-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    master commit: 5f335544cf5b716b0af51223e33373c4a7d65e8c
    master date: 2015-08-27 17:40:38 +0200

commit 9a00f96bb09ff8642bef2a3edde855a924093614
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Sep 10 15:51:56 2015 +0200

    x86/IO-APIC: don't create pIRQ mapping from masked RTE

    While moving our XenoLinux patches to 4.2-rc I noticed bogus "already
    mapped" messages resulting from Linux (legitimately) writing RTEs
    with only the mask bit set. Clearly we shouldn't even attempt to
    create a pIRQ <-> IRQ mapping from such RTEs.

    In the course of this I also found that the respective message isn't
    really useful without also printing the pre-existing mapping. And I
    noticed that map_domain_pirq() allowed IRQ0 to get through, despite
    us never allowing a domain to control that interrupt.
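[Editor's note: the two checks described in the IO-APIC commit above amount to a simple gate before creating a mapping. The sketch below is a hypothetical Python simplification; `should_map_pirq` is not a real Xen function, and the real checks sit in the RTE write path and in map_domain_pirq().]

```python
def should_map_pirq(rte_masked, irq):
    """Decide whether a pIRQ <-> IRQ mapping should be created for an
    RTE write (illustrative simplification of the commit's two fixes)."""
    if rte_masked:
        # An RTE with only the mask bit set carries nothing to map yet;
        # attempting a mapping here produced the bogus "already mapped"
        # messages.
        return False
    if irq == 0:
        # A domain is never allowed to control IRQ0.
        return False
    return True
```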
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 669d4b85c433674ab3b52ef707af0d3a551c941f
    master date: 2015-08-25 16:18:31 +0200

commit 6657f1b596c046c676d280ae6d57ae59de17804e
Author: Aravind Gopalakrishnan <aravind.gopalakrishnan@xxxxxxx>
Date:   Thu Sep 10 15:51:02 2015 +0200

    x86, amd_ucode: skip microcode updates for final levels

    Some older (Fam10h) systems require that certain applied microcode
    patch levels not be overwritten by the microcode loader; otherwise,
    system hangs are known to occur.

    The 'final_levels' of patch IDs have been obtained empirically. Refer
    to https://bugzilla.suse.com/show_bug.cgi?id=913996 for details of
    the issue. The short version is that people have predominantly
    noticed system hangs when trying to update microcode levels beyond
    the following patch IDs:

    [0x01000098, 0x0100009f, 0x010000af]

    From internal discussions, we gathered that the OS/hypervisor cannot
    reliably perform microcode updates beyond these levels due to
    hardware issues. Therefore, we need to abort the microcode update
    process if we hit any of these levels.

    In this patch, we check for those microcode versions and abort if the
    current core has one of those final patch levels applied by the BIOS.

    A Linux version of the patch has already made it into tip:
    http://marc.info/?l=linux-kernel&m=143703405627170

    Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@xxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
    master commit: 22c5675877c8209adcfdb6bceddb561320374529
    master date: 2015-08-25 16:17:13 +0200

commit 23c132291d62931be5cc58a67aaa757ebba83ddc
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Sep 10 15:49:59 2015 +0200

    x86/gdt: Drop write-only, xalloc()'d array from set_gdt()

    It is not used, and can cause a spurious failure of the set_gdt()
    hypercall in low-memory situations.
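[Editor's note: the final-levels check described in the amd_ucode commit above reduces to a membership test against the empirically determined patch IDs quoted in that commit message. The sketch below is a hypothetical Python simplification; `ucode_update_allowed` is not the hypervisor's actual function name, though the three patch IDs come from the commit text.]

```python
# Fam10h patch levels beyond which updates are known to hang the system
# (values quoted from the commit message above).
FINAL_LEVELS = (0x01000098, 0x0100009F, 0x010000AF)

def ucode_update_allowed(current_patch_level):
    """Abort the microcode update (return False) when the BIOS-applied
    patch level on the current core is one of the known final levels."""
    return current_patch_level not in FINAL_LEVELS
```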
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Reviewed-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
    master commit: a7bd9b1661304500cd18b7d216d616ecf053ebdb
    master date: 2015-08-05 10:32:45 +0100

commit ff9758b54ffe27ce2961de5412c0dc5af9c6abef
Author: Wei Liu <wei.liu2@xxxxxxxxxx>
Date:   Wed Sep 9 16:14:16 2015 +0200

    Config.mk: update in-tree OVMF changeset

    Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    master commit: 28e5d9a9ad8e7e1a50503ec97ce9d20cd451a5d1
    master date: 2015-06-30 16:16:47 +0100

commit 339f5743e84a28dd01ffa7498372e410301cd0b4
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Thu Aug 13 12:03:43 2015 +0100

    xen/arm: mm: Do not dump the p2m when mapping a foreign gfn

    The physmap operation XENMAPSPACE_gmfn_foreign dumps the p2m by
    calling dump_p2m_lookup() when an error occurs. But this function
    does not use a ratelimited printk, so any domain able to map foreign
    gfns could flood the Xen console.

    The information isn't useful, so drop it.

    This is XSA-141.

    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    (cherry picked from commit afc13fe5e21d18c09e44f8ae6f7f4484e9f1de7f)

commit 5b6f36000c40654491ab84a0c55af37129ec4793
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed Sep 2 14:47:25 2015 +0200

    update Xen version to 4.4.4-pre

commit 27b82b08b17f589f638f3d5be8dcea42b5e73330
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Aug 20 16:19:38 2015 +0200

    update Xen version to 4.4.3

(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel