[Xen-devel] [xen-4.10-testing test] 128607: regressions - FAIL
flight 128607 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/128607/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm                                       6 xen-build              fail REGR. vs. 128108
 build-arm64                                           6 xen-build              fail REGR. vs. 128108
 build-armhf                                           6 xen-build              fail REGR. vs. 128108

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ovmf-amd64                  15 guest-saverestore.2    fail pass in 128524
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm        15 guest-saverestore.2    fail pass in 128524
 test-amd64-amd64-xl-qemuu-ws16-amd64                 15 guest-saverestore.2    fail pass in 128524

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1                           1 build-check(1)         blocked  n/a
 test-armhf-armhf-xl-rtds                              1 build-check(1)         blocked  n/a
 test-armhf-armhf-xl-credit2                           1 build-check(1)         blocked  n/a
 build-armhf-libvirt                                   1 build-check(1)         blocked  n/a
 test-arm64-arm64-xl-xsm                               1 build-check(1)         blocked  n/a
 test-armhf-armhf-libvirt-raw                          1 build-check(1)         blocked  n/a
 test-arm64-arm64-xl-credit2                           1 build-check(1)         blocked  n/a
 test-armhf-armhf-xl-arndale                           1 build-check(1)         blocked  n/a
 test-arm64-arm64-xl                                   1 build-check(1)         blocked  n/a
 test-arm64-arm64-libvirt-xsm                          1 build-check(1)         blocked  n/a
 test-armhf-armhf-xl                                   1 build-check(1)         blocked  n/a
 test-armhf-armhf-xl-vhd                               1 build-check(1)         blocked  n/a
 test-armhf-armhf-xl-cubietruck                        1 build-check(1)         blocked  n/a
 test-armhf-armhf-libvirt                              1 build-check(1)         blocked  n/a
 build-arm64-libvirt                                   1 build-check(1)         blocked  n/a
 test-armhf-armhf-xl-credit1                           1 build-check(1)         blocked  n/a
 test-armhf-armhf-xl-multivcpu                         1 build-check(1)         blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64                 17 guest-stop             fail in 128524 never pass
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install    fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install     fail never pass
 test-amd64-amd64-libvirt-xsm                         13 migrate-support-check  fail never pass
 test-amd64-i386-libvirt                              13 migrate-support-check  fail never pass
 test-amd64-i386-libvirt-xsm                          13 migrate-support-check  fail never pass
 test-amd64-amd64-libvirt                             13 migrate-support-check  fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   11 migrate-support-check  fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    11 migrate-support-check  fail never pass
 test-amd64-amd64-libvirt-vhd                         12 migrate-support-check  fail never pass
 test-amd64-amd64-qemuu-nested-amd                    17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64                 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64                 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64                  17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64                  17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64                  17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64                 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64                  17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win10-i386                 10 windows-install        fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386                 10 windows-install        fail never pass
 test-amd64-i386-xl-qemut-win10-i386                  10 windows-install        fail never pass
 test-amd64-i386-xl-qemuu-win10-i386                  10 windows-install        fail never pass

version targeted for testing:
 xen                  788948bebcecca69bfac47e5514f2dc351dabad9
baseline version:
 xen                  0c1d5b68e27da167a51c2ea828636c14ff5c017b

Last test of basis   128108  2018-09-26 15:03:39 Z   15 days
Failing since        128505  2018-10-08 13:07:50 Z    3 days    3 attempts
Testing same since   128524  2018-10-09 11:05:45 Z    2 days    2 attempts
------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Daniel Kiper <daniel.kiper@xxxxxxxxxx>
  Dario Faggioli <dfaggioli@xxxxxxxx>
  Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Paul Durrant <paul.durrant@xxxxxxxxxx>
  Roger Pau Monné <roger.pau@xxxxxxxxxx>
  Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>
  Wei Liu <wei.liu2@xxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass
 build-arm64-xsm                                              fail
 build-i386-xsm                                               pass
 build-amd64-xtf                                              pass
 build-amd64                                                  pass
 build-arm64                                                  fail
 build-armhf                                                  fail
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-arm64-libvirt                                          blocked
 build-armhf-libvirt                                          blocked
 build-i386-libvirt                                           pass
 build-amd64-prev                                             pass
 build-i386-prev                                              pass
 build-amd64-pvops                                            pass
 build-arm64-pvops                                            pass
 build-armhf-pvops                                            pass
 build-i386-pvops                                             pass
 build-amd64-rumprun                                          pass
 build-i386-rumprun                                           pass
 test-xtf-amd64-amd64-1                                       pass
 test-xtf-amd64-amd64-2                                       pass
 test-xtf-amd64-amd64-3                                       pass
 test-xtf-amd64-amd64-4                                       pass
 test-xtf-amd64-amd64-5                                       pass
 test-amd64-amd64-xl                                          pass
 test-arm64-arm64-xl                                          blocked
 test-armhf-armhf-xl                                          blocked
 test-amd64-i386-xl                                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                fail
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass
 test-amd64-amd64-libvirt-xsm                                 pass
 test-arm64-arm64-libvirt-xsm                                 blocked
 test-amd64-i386-libvirt-xsm                                  pass
 test-amd64-amd64-xl-xsm                                      pass
 test-arm64-arm64-xl-xsm                                      blocked
 test-amd64-i386-xl-xsm                                       pass
 test-amd64-amd64-qemuu-nested-amd                            fail
 test-amd64-amd64-xl-pvhv2-amd                                pass
 test-amd64-i386-qemut-rhel6hvm-amd                           pass
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass
 test-amd64-i386-freebsd10-amd64                              pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail
 test-amd64-amd64-rumprun-amd64                               pass
 test-amd64-amd64-xl-qemut-win7-amd64                         fail
 test-amd64-i386-xl-qemut-win7-amd64                          fail
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail
 test-amd64-i386-xl-qemuu-win7-amd64                          fail
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail
 test-amd64-i386-xl-qemut-ws16-amd64                          fail
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail
 test-armhf-armhf-xl-arndale                                  blocked
 test-amd64-amd64-xl-credit1                                  pass
 test-arm64-arm64-xl-credit1                                  blocked
 test-armhf-armhf-xl-credit1                                  blocked
 test-amd64-amd64-xl-credit2                                  pass
 test-arm64-arm64-xl-credit2                                  blocked
 test-armhf-armhf-xl-credit2                                  blocked
 test-armhf-armhf-xl-cubietruck                               blocked
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail
 test-amd64-i386-freebsd10-i386                               pass
 test-amd64-i386-rumprun-i386                                 pass
 test-amd64-amd64-xl-qemut-win10-i386                         fail
 test-amd64-i386-xl-qemut-win10-i386                          fail
 test-amd64-amd64-xl-qemuu-win10-i386                         fail
 test-amd64-i386-xl-qemuu-win10-i386                          fail
 test-amd64-amd64-qemuu-nested-intel                          pass
 test-amd64-amd64-xl-pvhv2-intel                              pass
 test-amd64-i386-qemut-rhel6hvm-intel                         pass
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass
 test-amd64-amd64-libvirt                                     pass
 test-armhf-armhf-libvirt                                     blocked
 test-amd64-i386-libvirt                                      pass
 test-amd64-amd64-migrupgrade                                 pass
 test-amd64-i386-migrupgrade                                  pass
 test-amd64-amd64-xl-multivcpu                                pass
 test-armhf-armhf-xl-multivcpu                                blocked
 test-amd64-amd64-pair                                        pass
 test-amd64-i386-pair                                         pass
 test-amd64-amd64-libvirt-pair                                pass
 test-amd64-i386-libvirt-pair                                 pass
 test-amd64-amd64-amd64-pvgrub                                pass
 test-amd64-amd64-i386-pvgrub                                 pass
 test-amd64-amd64-pygrub                                      pass
 test-amd64-amd64-xl-qcow2                                    pass
 test-armhf-armhf-libvirt-raw                                 blocked
 test-amd64-i386-xl-raw                                       pass
 test-amd64-amd64-xl-rtds                                     pass
 test-armhf-armhf-xl-rtds                                     blocked
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass
 test-amd64-amd64-xl-shadow                                   pass
 test-amd64-i386-xl-shadow                                    pass
 test-amd64-amd64-libvirt-vhd                                 pass
 test-armhf-armhf-xl-vhd                                      blocked

------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

Not pushing.

------------------------------------------------------------
commit 788948bebcecca69bfac47e5514f2dc351dabad9
Author: Wei Liu <wei.liu2@xxxxxxxxxx>
Date:   Mon Aug 20 09:38:18 2018 +0100

    tools/tests: fix an xs-test.c issue

    The ret variable can be used uninitialised when iters is 0. Initialise
    ret at the beginning to fix this issue.

    Reported-by: Steven Haigh <netwiz@xxxxxxxxx>
    Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    (cherry picked from commit 3a2b8525b883baa87fe89b3da58f5c09fa599b99)
    (cherry picked from commit 33664f9a05401fac8f2c0be0bb7ee8a1851e4dcf)
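The fix above amounts to giving the accumulator a defined value before the loop, so a sensible result is returned even when the loop never runs. A minimal sketch of the pattern (run_one_iteration() and the surrounding code are made up for illustration; this is not the actual xs-test.c):

    #include <stdio.h>

    static int run_one_iteration(int i)
    {
        return i % 2;              /* stand-in for the real per-iteration test */
    }

    static int run_test(int iters)
    {
        int ret = 0;               /* initialise up front: defined even if iters == 0 */
        int i;

        for ( i = 0; i < iters; i++ )
            ret = run_one_iteration(i);

        return ret;                /* previously garbage when the loop never executed */
    }

    int main(void)
    {
        printf("%d\n", run_test(0));
        return 0;
    }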
commit 61dc0159b69bd3eec109188386c8b13fbdfed7b2
Author: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
Date:   Mon Oct 8 14:40:21 2018 +0200

    x86/boot: Allocate one extra module slot for Xen image placement

    Commit 9589927 (x86/mb2: avoid Xen image when looking for module/crashkernel
    position) fixed relocation issues for Multiboot2 protocol. Unfortunately
    it missed to allocate module slot for Xen image placement in early boot
    path. So, let's fix it right now.

    Reported-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Signed-off-by: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 4c5f9dbebc0bd2afee1ecd936c74ffe65756950f
    master date: 2018-09-27 11:17:47 +0100

commit d86c9aeae6cb753e931e00f7ee020d73df9070c0
Author: Dario Faggioli <dfaggioli@xxxxxxxx>
Date:   Mon Oct 8 14:39:46 2018 +0200

    xen: sched/Credit2: fix bug when moving CPUs between two Credit2 cpupools

    Whether or not a CPU is assigned to a runqueue (and, if yes, to which
    one) within a Credit2 scheduler instance must be both a per-cpu and
    per-scheduler instance one.

    In fact, when we move a CPU between cpupools, we first setup its
    per-cpu data in the new pool, and then cleanup its per-cpu data from
    the old pool. In Credit2, when there currently is no per-scheduler,
    per-cpu data (as the cpu-to-runqueue map is stored on a per-cpu basis
    only), this means that the cleanup of the old per-cpu data can mess
    with the new per-cpu data, leading to crashes like this:

    https://www.mail-archive.com/xen-devel@xxxxxxxxxxxxxxxxxxxx/msg23306.html
    https://www.mail-archive.com/xen-devel@xxxxxxxxxxxxxxxxxxxx/msg23350.html

    Basically, when csched2_deinit_pdata() is called for CPU 13, for fully
    removing the CPU from Pool-0, per_cpu(13,runq_map) already contain the
    id of the runqueue to which the CPU has been assigned in the scheduler
    of Pool-1, which means wrong runqueue manipulations happen in Pool-0's
    scheduler. Furthermore, at the end of such call, that same runq_map is
    updated with -1, which is what causes the BUG_ON in csched2_schedule(),
    on CPU 13, to trigger.

    So, instead of reverting a2c4e5ab59d "xen: credit2: make the cpu to
    runqueue map per-cpu" (as we don't want to go back to having the huge
    array in struct csched2_private) add a per-cpu scheduler specific data
    structure, like, for instance, Credit1 has already. That (for now) only
    contains one field: the id of the runqueue the CPU is assigned to.

    Signed-off-by: Dario Faggioli <dfaggioli@xxxxxxxx>
    Reviewed-by: Juergen Gross <jgross@xxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: 6e395f477fb854f11de83a951a070d3aacb6dc59
    master date: 2018-09-18 16:50:44 +0100
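In other words, the cpu-to-runqueue map stops being a single per-cpu variable shared by every Credit2 instance and becomes part of data owned by each scheduler instance, so deinit in the old pool can no longer clobber what the new pool has just set up. A rough sketch of that shape (all names below are illustrative assumptions, not the actual patch):

    #include <stdlib.h>

    #define NR_CPUS 64

    /* Per-scheduler-instance, per-cpu data; its only field so far is the id
     * of the runqueue the CPU is assigned to within THIS instance. */
    struct pcpu_data {
        int runq_id;                              /* -1 while unassigned */
    };

    struct sched_instance {
        struct pcpu_data *pcpu[NR_CPUS];          /* owned by this instance */
    };

    /* Moving a CPU between pools: set it up in the new instance first... */
    static int assign_cpu(struct sched_instance *new, int cpu, int runq)
    {
        new->pcpu[cpu] = calloc(1, sizeof(*new->pcpu[cpu]));
        if ( !new->pcpu[cpu] )
            return -1;
        new->pcpu[cpu]->runq_id = runq;
        return 0;
    }

    /* ...then tear it down in the old one.  Because the map is keyed by the
     * scheduler instance as well as by the CPU, this cannot overwrite the
     * runq_id the new instance just stored -- which was exactly the problem
     * when the map lived in a bare per-cpu variable. */
    static void deassign_cpu(struct sched_instance *old, int cpu)
    {
        free(old->pcpu[cpu]);
        old->pcpu[cpu] = NULL;
    }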
commit 45197905fc5c2151960dfe6f039a5a2e14f0b4aa
Author: Paul Durrant <paul.durrant@xxxxxxxxxx>
Date:   Mon Oct 8 14:39:10 2018 +0200

    x86/hvm/emulate: make sure rep I/O emulation does not cross GFN boundaries

    When emulating a rep I/O operation it is possible that the ioreq will
    describe a single operation that spans multiple GFNs. This is fine as
    long as all those GFNs fall within an MMIO region covered by a single
    device model, but unfortunately the higher levels of the emulation code
    do not guarantee that. This is something that should almost certainly
    be fixed, but in the meantime this patch makes sure that MMIO is
    truncated at GFN boundaries and hence the appropriate device model is
    re-evaluated for each target GFN.

    NOTE: This patch does not deal with the case of a single MMIO operation
          spanning a GFN boundary. That is more complex to deal with and is
          deferred to a subsequent patch.

    Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

    Convert calculations to be 32-bit only.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 7626edeaca972e3e823535dcc44338f6b2f0b21f
    master date: 2018-08-16 09:27:30 +0200
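At its core this is a clamp on the repetition count: emulate only as many reps as still fit in the current guest frame, then re-select the device model for the next frame. A hedged sketch of that calculation, under assumed names (gpa, bytes_per_rep and reps follow the description above; this is not the actual hvm/emulate.c code):

    #include <stdint.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1u << PAGE_SHIFT)

    /* Limit a rep I/O burst so [gpa, gpa + reps * bytes_per_rep) stays inside
     * the guest frame containing gpa; assumes bytes_per_rep != 0.  As the
     * commit notes, a single rep that itself straddles the boundary
     * (max_reps == 0) is left to a later patch.  Calculations are kept
     * 32-bit, matching the committer's note. */
    static uint32_t clamp_reps_to_gfn(uint64_t gpa, uint32_t bytes_per_rep,
                                      uint32_t reps)
    {
        uint32_t offset   = gpa & (PAGE_SIZE - 1);   /* offset within frame */
        uint32_t space    = PAGE_SIZE - offset;      /* bytes left in frame */
        uint32_t max_reps = space / bytes_per_rep;   /* whole reps that fit */

        return reps < max_reps ? reps : max_reps;
    }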
commit 54838353189600af183ef09829276162f4b5e7f9
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Mon Oct 8 14:38:34 2018 +0200

    x86/cpuidle: don't init stats lock more than once

    Osstest flight 122363, having hit an NMI watchdog timeout, shows CPU1 at

    Xen call trace:
       [<ffff82d08023d3f4>] _spin_lock+0x30/0x57
       [<ffff82d0802d9346>] update_last_cx_stat+0x29/0x42
       [<ffff82d0802d96f3>] cpu_idle.c#acpi_processor_idle+0x2ff/0x596
       [<ffff82d080276713>] domain.c#idle_loop+0xa8/0xc3

    and CPU0 at

    Xen call trace:
       [<ffff82d08023d173>] on_selected_cpus+0xb7/0xde
       [<ffff82d0802dbe22>] powernow.c#powernow_cpufreq_target+0x110/0x1cb
       [<ffff82d080257973>] __cpufreq_driver_target+0x43/0xa6
       [<ffff82d080256b0d>] cpufreq_governor_dbs+0x324/0x37a
       [<ffff82d080257bf2>] __cpufreq_set_policy+0xfa/0x19d
       [<ffff82d080256044>] cpufreq_add_cpu+0x3a1/0x5df
       [<ffff82d0802dbab4>] cpufreq_cpu_init+0x17/0x1a
       [<ffff82d0802567a8>] set_px_pminfo+0x2b6/0x2f7
       [<ffff82d08029f1bf>] do_platform_op+0xe75/0x1977
       [<ffff82d0803712c5>] pv_hypercall+0x1f4/0x440
       [<ffff82d0803784a5>] lstar_enter+0x115/0x120

    That is, Dom0's ACPI processor driver is in the process of uploading
    Px and Cx data. Looking at the ticket lock state in CPU1's registers,
    it is waiting for ticket 0x0000 to have its turn, while the supposed
    current owner's ticket is 0x0001, which is an invalid state (and
    neither of the other two CPUs holds the lock anyway). Hence I can only
    conclude that cpuidle_init_cpu(1) ran on CPU 0 while some other CPU
    held the lock (the unlock then put the lock in the state that CPU1 is
    observing).

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 2f64a251fa10dd4d62f84967e3dafa709f5e96ab
    master date: 2018-04-27 14:35:35 +0200
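The pattern behind the fix: a lock embedded in per-CPU state must be initialised exactly once, when that state is first created, and never again when the init path is re-entered, since re-initialisation wipes out a ticket another CPU may be holding at that very moment. A minimal sketch of such a guard (the names and the pthread stand-in for the hypervisor's ticket spinlock are assumptions, not the actual cpu_idle.c code):

    #include <stdlib.h>
    #include <pthread.h>              /* stand-in for a ticket spinlock */

    #define NR_CPUS 64

    struct cx_stats {
        pthread_mutex_t lock;
        unsigned long usage;
    };

    static struct cx_stats *per_cpu_stats[NR_CPUS];

    /* Re-entrant per-CPU init path: the lock is set up only when the per-CPU
     * structure is first allocated.  On later calls the structure already
     * exists and its lock -- possibly held by another CPU right now -- is
     * left strictly alone. */
    static int cpuidle_init_cpu_sketch(unsigned int cpu)
    {
        if ( per_cpu_stats[cpu] )
            return 0;                        /* already set up: do NOT re-init */

        per_cpu_stats[cpu] = calloc(1, sizeof(*per_cpu_stats[cpu]));
        if ( !per_cpu_stats[cpu] )
            return -1;

        pthread_mutex_init(&per_cpu_stats[cpu]->lock, NULL);
        return 0;
    }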
commit 518726dc1dd1a11668c841f4d6ea47beca18119a
Author: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Date:   Mon Oct 8 14:37:25 2018 +0200

    x86/efi: split compiler vs linker support

    So that an ELF binary with support for EFI services will be built when
    the compiler supports the MS ABI, regardless of the linker support for
    PE.

    Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
    Tested-by: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
    master commit: 93249f7fc17c1f3a2aa8bf9ea055aa326e93a4ae
    master date: 2018-07-31 10:25:06 +0200

commit d091a49f89e979ca4ca7dc583c1f8ef7d1312a48
Author: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Date:   Mon Oct 8 14:36:38 2018 +0200

    x86/efi: move the logic to detect PE build support

    So that it can be used by other components apart from the efi specific
    code.

    By moving the detection code creating a dummy efi/disabled file can be
    avoided. This is required so that the conditional used to define the
    efi symbol in the linker script can be removed and instead the
    definition of the efi symbol can be guarded using the preprocessor.

    The motivation behind this change is to be able to build Xen using lld
    (the LLVM linker), that at least on version 6.0.0 doesn't work properly
    with a DEFINED being used in a conditional expression:

    ld    -melf_x86_64_fbsd  -T xen.lds -N prelink.o --build-id=sha1 \
        /root/src/xen/xen/common/symbols-dummy.o -o /root/src/xen/xen/.xen-syms.0
    ld: error: xen.lds:233: symbol not found: efi

    Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Tested-by: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
    master commit: 18cd4997d26b9df95dda87503e41c823279a07a0
    master date: 2018-07-31 10:24:22 +0200

commit 923af25a470ccb8fd4e4562d85012a79b5e632a7
Author: Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>
Date:   Mon Oct 8 14:35:20 2018 +0200

    x86/shutdown: use ACPI reboot method for Dell PowerEdge R540

    When EFI booting the Dell PowerEdge R540 it consistently wanders into
    the weeds and gets an invalid opcode in the EFI ResetSystem call. This
    is the same bug which affects the PowerEdge R740 so fix it in the same
    way: quirk this hardware to use the ACPI reboot method instead.

    BIOS Information
        Vendor: Dell Inc.
        Version: 1.3.7
        Release Date: 02/09/2018
    System Information
        Manufacturer: Dell Inc.
        Product Name: PowerEdge R540

    Signed-off-by: Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 328ca55b7bd47e1324b75cce2a6c461308ecf93d
    master date: 2018-06-28 09:29:13 +0200
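This quirk, and the R740 one in the next commit, use the same mechanism: a DMI-matched table entry that overrides the default reboot path with the ACPI one on known-bad firmware. A simplified sketch of such a table (types and the matching helper are illustrative approximations, not code copied from the hypervisor):

    #include <stddef.h>
    #include <string.h>

    enum reboot_type { BOOT_EFI, BOOT_ACPI };

    static enum reboot_type reboot_type = BOOT_EFI;   /* default when EFI-booted */

    struct reboot_quirk {
        const char *vendor;        /* DMI system vendor to match */
        const char *product;       /* DMI product name to match  */
        enum reboot_type forced;
    };

    /* Machines whose EFI ResetSystem call is known to crash. */
    static const struct reboot_quirk quirks[] = {
        { "Dell Inc.", "PowerEdge R540", BOOT_ACPI },
        { "Dell Inc.", "PowerEdge R740", BOOT_ACPI },
    };

    static void apply_reboot_quirks(const char *vendor, const char *product)
    {
        size_t i;

        for ( i = 0; i < sizeof(quirks) / sizeof(quirks[0]); i++ )
            if ( !strcmp(vendor, quirks[i].vendor) &&
                 !strcmp(product, quirks[i].product) )
                reboot_type = quirks[i].forced;
    }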
commit 5ba0bb072aa7274be1fdd43f581b895dc78e60e1
Author: Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>
Date:   Mon Oct 8 14:33:56 2018 +0200

    x86/shutdown: use ACPI reboot method for Dell PowerEdge R740

    When EFI booting the Dell PowerEdge R740, it consistently wanders into
    the weeds and gets an invalid opcode in the EFI ResetSystem call. Quirk
    this hardware to use the ACPI reboot method instead.

    Example stack trace:

    ----[ Xen-4.11-unstable  x86_64  debug=n   Not tainted ]----
    CPU:    0
    RIP:    e008:[<0000000000000017>] 0000000000000017
    RFLAGS: 0000000000010202   CONTEXT: hypervisor
    rax: 0000000066eb2ff0   rbx: ffff83005f627c20   rcx: 000000006c54e100
    rdx: 0000000000000000   rsi: 0000000000000065   rdi: 000000107355f000
    rbp: ffff83005f627c70   rsp: ffff83005f627b48   r8:  ffff83005f627b90
    r9:  0000000000000000   r10: ffff83005f627c88   r11: 0000000000000000
    r12: 0000000000000000   r13: 0000000000000cf9   r14: 0000000000000065
    r15: ffff830000000000   cr0: 0000000080050033   cr4: 00000000003526e0
    cr3: 000000107355f000   cr2: ffffc90000cff000
    fsb: 0000000000000000   gsb: ffff88019f600000   gss: 0000000000000000
    ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
    Xen code around <0000000000000017> (0000000000000017):
     f0 d8 dd 00 f0 54 ff 00 <f0> 50 dd 00 f0 d8 dd 00 f0 a5 fe 00 f0 87 e9 00
    Xen stack trace from rsp=ffff83005f627b48:
       ffff83005f627b50 ffffffffffffffda 000000006c547aaa ffff82d000000001
       ffff83005f627bec 000000107355f000 000000006c546fb8 ffff83107ffe3240
       0000000000000000 0000000000000000 8000000000000002 0000000000000000
       000000006c546b95 000000006c54c700 ffff83005f627bdc ffff83005f627be8
       000000005f616000 ffff83005f627c20 0000000000000000 0000000000000cf9
       ffff820080350001 000000000000000b ffff82d080351eda 0000000000000000
       0000000000000000 0000000000000000 0000000000000000 000000005f616000
       0000000000000000 ffff82d08095ff60 ffff82d08095ff60 000000f100000000
       ffff82d080296097 000000000000e008 0000000000000000 ffff83005f627c88
       0000000000000000 00000000fffffffe ffff82d0802959d2 ffff82d0802959d2
       000000008095f300 000000005f627c9c 00000000000000f8 0000000000000000
       00000000000000f8 ffff82d080932c00 0000000000000000 ffff82d08095f7c8
       ffff82d080932c00 0000000000000000 0000000000000000 ffff82d080295a9b
       ffff83005f627d98 ffff82d0802361f3 ffff82d080932c00 0000000080000000
       ffff83005f627d98 ffff82d080279a19 ffff82d08095f02c ffff82d080000000
       0000000000000000 00000000000000fb 0000000000000000 00000071484e54f6
       ffff831073542098 ffff82d08093ac78 ffff831072befd30 0000000000000000
       0000000000000000 0000000000000000 0000000000000000 0000000000000000
       0000000000000000 ffff82d08034f185 ffff82d080949460 0000000000000000
       ffff82d08095f270 0000000000000008 ffff83107357ae20 0000007146ce4bd3
    Xen call trace:
       [<0000000000000017>] 0000000000000017
       [<ffff82d080351eda>] efi_reset_system+0x5a/0x90
       [<ffff82d080296097>] smp_send_stop+0x97/0xa0
       [<ffff82d0802959d2>] machine_restart+0x212/0x2d0
       [<ffff82d0802959d2>] machine_restart+0x212/0x2d0
       [<ffff82d080295a9b>] shutdown.c#__machine_restart+0xb/0x10
       [<ffff82d0802361f3>] smp_call_function_interrupt+0x53/0x80
       [<ffff82d080279a19>] do_IRQ+0x259/0x660
       [<ffff82d08034f185>] common_interrupt+0x85/0x90
       [<ffff82d0802c6152>] mwait-idle.c#mwait_idle+0x242/0x390
       [<ffff82d08026b446>] domain.c#idle_loop+0x86/0xc0

    ****************************************
    Panic on CPU 0:
    FATAL TRAP: vector = 6 (invalid opcode)
    ****************************************

    dmidecode info:

    BIOS Information:
        Vendor: Dell Inc.
        Version: 1.2.11
        Release Date: 10/19/2017
        BIOS Revision: 1.2
    System Information:
        Manufacturer: Dell Inc.
        Product Name: PowerEdge R740

    Signed-off-by: Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: f97f774b5aa6b471d1fed1c451c89ec7457dadf2
    master date: 2018-01-24 18:01:00 +0100

commit 173c33800649ade708ad369cf34f3af338490f1c
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Mon Oct 8 14:32:16 2018 +0200

    update Xen version to 4.10.3-pre

(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel