Re: [Xen-devel] balloon_mutex lockdep complaint at HVM domain destroy
On 25/05/16 15:30, Ed Swierk wrote:
> The following lockdep dump occurs whenever I destroy an HVM domain, on
> Linux 4.4 Dom0 with CONFIG_XEN_BALLOON=n on recent stable Xen 4.5.

This occurs in dom0? Or the guest that's being destroyed?

> Any clues whether this is a real potential deadlock, or how to silence
> it if not?

It's a bug but...

> ======================================================
> [ INFO: RECLAIM_FS-safe -> RECLAIM_FS-unsafe lock order detected ]
> 4.4.11-grsec #1 Not tainted
  ^^^^^^^^^^^^

...this isn't a vanilla kernel? Can you try vanilla 4.6? Because:

> IN-RECLAIM_FS-W at:
>   [<__lock_acquire at lockdep.c:2839>] ffffffff810becc5
>   [<lock_acquire at paravirt.h:839>] ffffffff810c0ac9
>   [<mutex_lock_nested at mutex.c:526>] ffffffff816d1b4c
>   [<mn_invl_range_start at gntdev.c:476>] ffffffff8143c3d4
>   [<mn_invl_page at gntdev.c:490>] ffffffff8143c450
>   [<__mmu_notifier_invalidate_page at mmu_notifier.c:183>] ffffffff8119de42
>   [<try_to_unmap_one at mmu_notifier.h:275>] ffffffff811840c2
>   [<rmap_walk at rmap.c:1689>] ffffffff81185051
>   [<try_to_unmap at rmap.c:1534>] ffffffff81185497
>   [<shrink_page_list at vmscan.c:1063>] ffffffff811599b7
>   [<shrink_inactive_list at spinlock.h:339>] ffffffff8115a489
>   [<shrink_lruvec at vmscan.c:1942>] ffffffff8115af3a
>   [<shrink_zone at vmscan.c:2411>] ffffffff8115b1bb
>   [<kswapd at vmscan.c:3116>] ffffffff8115c1e4
>   [<kthread at kthread.c:209>] ffffffff8108eccc
>   [<ret_from_fork at entry_64.S:890>] ffffffff816d706e

We should not be reclaiming pages from a gntdev VMA since it's special
(marked as VM_IO).

David
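
The inversion lockdep reports here follows a classic kernel pattern: a
lock that the reclaim path must be able to take (gntdev's per-file
mutex, reached via the mn_invl_range_start()/mn_invl_page()
MMU-notifier callbacks in the trace) is elsewhere held across a memory
allocation that can itself enter reclaim. Below is a minimal sketch of
that generic pattern; the names my_lock, my_reclaim_cb() and
my_alloc_path() are illustrative only, not gntdev identifiers.

#include <linux/mutex.h>
#include <linux/slab.h>

static DEFINE_MUTEX(my_lock);

/*
 * Path A: runs inside reclaim (cf. kswapd -> try_to_unmap ->
 * __mmu_notifier_invalidate_page in the trace above).  Because
 * reclaim can block on my_lock here, lockdep marks the lock
 * RECLAIM_FS-safe.
 */
static void my_reclaim_cb(void)
{
        mutex_lock(&my_lock);
        /* ... invalidate/tear down mappings ... */
        mutex_unlock(&my_lock);
}

/*
 * Path B: the same lock held across an allocation.  GFP_KERNEL may
 * enter direct reclaim, which may end up in my_reclaim_cb(), which
 * then blocks on my_lock we already hold.  Holding the lock across
 * this allocation is the RECLAIM_FS-unsafe half of the inversion.
 */
static void *my_alloc_path(void)
{
        void *p;

        mutex_lock(&my_lock);
        p = kmalloc(PAGE_SIZE, GFP_KERNEL);
        mutex_unlock(&my_lock);
        return p;
}

The usual remedies are to avoid allocating while holding such a lock,
or to rework the reclaim-side callback so that it no longer needs it.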
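On the closing VM_IO point: a driver marks such a VMA in its ->mmap
handler so the core mm treats the range as a device mapping
(get_user_pages() refuses VM_IO VMAs, and the pages mapped there are
typically not managed on the LRU, so reclaim should never see them).
A minimal sketch of that mechanism follows, assuming a hypothetical
my_mmap() handler; this is not gntdev's actual gntdev_mmap(), which
additionally sets up the grant mappings.

#include <linux/fs.h>
#include <linux/mm.h>

static int my_mmap(struct file *file, struct vm_area_struct *vma)
{
        /*
         * VM_IO: treat the range as I/O space, keeping core mm away
         * from it.  VM_DONTEXPAND: disallow growing it via mremap().
         * VM_DONTDUMP: exclude it from core dumps.
         * (remap_pfn_range() would set VM_IO | VM_PFNMAP itself.)
         */
        vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;

        /* ... insert the driver's pages/PFNs into the VMA ... */
        return 0;
}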