Re: [Xen-devel] [xen-unstable test] 118594: regressions - FAIL
>>> On 06.02.18 at 06:40, <osstest-admin@xxxxxxxxxxxxxx> wrote:
> flight 118594 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/118594/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-xl-qemut-ws16-amd64  7 xen-boot    fail REGR. vs. 118582

(XEN) Watchdog timer detects that CPU2 is stuck!
(XEN) ----[ Xen-4.11-unstable  x86_64  debug=y   Not tainted ]----
(XEN) CPU:    2
(XEN) RIP:    e008:[<ffff82d08023c5fa>] _spin_lock+0x30/0x57
(XEN) RFLAGS: 0000000000000297   CONTEXT: hypervisor (d0v2)
(XEN) rax: 000000000000d407   rbx: ffff82d080467fd0   rcx: ffff82d080467fd0
(XEN) rdx: 000000000000d408   rsi: 000000000000d407   rdi: ffff82d080467fd6
(XEN) rbp: ffff8302187b7af8   rsp: ffff8302187b7ae8   r8:  0000000000000000
(XEN) r9:  ffff8302186f8000   r10: 0000000000000000   r11: 0000000000000000
(XEN) r12: 0000000000000002   r13: ffff8302187bcf90   r14: 0000000000000000
(XEN) r15: ffff82e0041b3420   cr0: 0000000080050033   cr4: 00000000000406e0
(XEN) cr3: 0000000211e08000   cr2: 00007f28051300b0
(XEN) fsb: 00007f28056c4700   gsb: ffff88001fc80000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen code around <ffff82d08023c5fa> (_spin_lock+0x30/0x57):
(XEN)  0f 48 89 d9 89 c2 f3 90 <66> 8b 01 66 39 d0 75 f6 48 8d 05 a3 2d 37 00 48
(XEN) Xen stack trace from rsp=ffff8302187b7ae8:
...
(XEN) Xen call trace:
(XEN)    [<ffff82d08023c5fa>] _spin_lock+0x30/0x57
(XEN)    [<ffff82d08029f08d>] flush_area_mask+0xae/0x133
(XEN)    [<ffff82d08028a109>] mm.c#_get_page_type+0x2fe/0x1757
(XEN)    [<ffff82d08028b570>] get_page_type+0xe/0x29
(XEN)    [<ffff82d080289b8b>] get_page_from_l1e+0x4f9/0x779
(XEN)    [<ffff82d08028ce22>] mm.c#mod_l1_entry+0x7e4/0x819
(XEN)    [<ffff82d0802919a5>] mm.c#__do_update_va_mapping+0x163/0x67f
(XEN)    [<ffff82d080291edb>] do_update_va_mapping+0x1a/0x1e
(XEN)    [<ffff82d08036050c>] arch_do_multicall_call+0x65/0x12a
(XEN)    [<ffff82d080224389>] do_multicall+0x247/0x44c
(XEN)    [<ffff82d080360269>] pv_hypercall+0x1ef/0x42d
(XEN)    [<ffff82d080364c58>] x86_64/entry.S#test_all_events+0/0x30
(XEN)
(XEN) CPU1 @ e008:ffff82d08020237d (0000000000000000)

In __bitmap_empty() (I would guess called from on_selected_cpus()).

(XEN) CPU4 @ e008:ffff82d08023c5fa (0000000000000000)
(XEN) CPU0 @ e008:ffff82d08023c5fa (0000000000000000)

Both in _spin_lock(), just like above.

(XEN) CPU5 @ e008:ffff82d08023c381 (0000000000000000)

In on_selected_cpus().

(XEN) CPU3 @ e008:ffff82d0802ca087 (0000000000000000)

Right after the HLT in acpi_idle_do_entry().

If we assume that CPUs 0 and 4 are also trying to acquire flush_lock, and
given that CPU3 is idle, it must be CPU1 or CPU5 which holds flush_lock, but
right now I can't see how that could be happening. Of course it is far from
optimal that we don't know more than just the RIP for each of the five remote
CPUs. I vaguely recall that we once decided to pass "async-show-all"
uniformly in osstest - Ian, was that perhaps lost (assuming I remember
correctly)?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
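[Editor's note: for readers unfamiliar with why several CPUs can sit at the
same RIP inside _spin_lock, the disassembly above ("66 8b 01 / 66 39 d0 /
75 f6" - a 16-bit load, compare, and backward branch) is the wait loop of a
ticket lock, which is what Xen's spinlocks use. Below is a minimal,
hypothetical simplification in standard C11 atomics, not Xen's actual code:
each acquirer takes a ticket from the tail counter and spins until the head
counter reaches its ticket, so a stuck lock leaves every waiter looping at
that one instruction.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical simplification of a ticket spinlock. In the dump above,
 * rsi (d407) would be the sampled head and rdx (d408) the waiter's ticket:
 * they never become equal, so the CPU spins forever. */
typedef struct {
    _Atomic uint16_t head;   /* ticket currently being served */
    _Atomic uint16_t tail;   /* next ticket to hand out */
} ticket_lock_t;

static void ticket_lock(ticket_lock_t *l)
{
    /* Atomically take the next ticket. */
    uint16_t me = atomic_fetch_add(&l->tail, 1);

    /* Wait loop: 16-bit load of head, compare with our ticket, branch
     * back - the three instructions visible in the disassembly. */
    while (atomic_load(&l->head) != me)
        ;  /* real code inserts a pause/cpu_relax() here */
}

static void ticket_unlock(ticket_lock_t *l)
{
    /* Serve the next waiter in FIFO order. */
    atomic_fetch_add(&l->head, 1);
}
```

Because tickets are handed out in order, waiters acquire the lock FIFO; the
downside, seen here, is that one holder who never releases (or an IPI
handshake that never completes, as in flush_area_mask()) wedges every later
waiter at the same spin instruction.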