
Re: [Xen-devel] balloon_mutex lockdep complaint at HVM domain destroy



On Wed, May 25, 2016 at 9:58 AM, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
> This occurs in dom0?  Or the guest that's being destroyed?

The lockdep warning comes from dom0 when the HVM guest is being destroyed.

> It's a bug but...
>
>> ======================================================
>> [ INFO: RECLAIM_FS-safe -> RECLAIM_FS-unsafe lock order detected ]
>> 4.4.11-grsec #1 Not tainted
>   ^^^^^^^^^^^^
> ...this isn't a vanilla kernel?  Can you try vanilla 4.6?

I tried vanilla 4.4.11 and got the same result. I'm having trouble
booting 4.6.0 at all; that must be another regression in the early Xen
boot code.

> Because:
>
>>    IN-RECLAIM_FS-W at:
>>                        [<__lock_acquire at lockdep.c:2839>] ffffffff810becc5
>>                        [<lock_acquire at paravirt.h:839>] ffffffff810c0ac9
>>                        [<mutex_lock_nested at mutex.c:526>] ffffffff816d1b4c
>>                        [<mn_invl_range_start at gntdev.c:476>] ffffffff8143c3d4
>>                        [<mn_invl_page at gntdev.c:490>] ffffffff8143c450
>>                        [<__mmu_notifier_invalidate_page at mmu_notifier.c:183>] ffffffff8119de42
>>                        [<try_to_unmap_one at mmu_notifier.h:275>] ffffffff811840c2
>>                        [<rmap_walk at rmap.c:1689>] ffffffff81185051
>>                        [<try_to_unmap at rmap.c:1534>] ffffffff81185497
>>                        [<shrink_page_list at vmscan.c:1063>] ffffffff811599b7
>>                        [<shrink_inactive_list at spinlock.h:339>] ffffffff8115a489
>>                        [<shrink_lruvec at vmscan.c:1942>] ffffffff8115af3a
>>                        [<shrink_zone at vmscan.c:2411>] ffffffff8115b1bb
>>                        [<kswapd at vmscan.c:3116>] ffffffff8115c1e4
>>                        [<kthread at kthread.c:209>] ffffffff8108eccc
>>                        [<ret_from_fork at entry_64.S:890>] ffffffff816d706e
>
> We should not be reclaiming pages from a gntdev VMA since it's special
> (marked as VM_IO).
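
One way I could check that claim directly might be a one-off warning in
the unmap path, something like the following (untested sketch against
4.4; try_to_unmap_one() is where I'd guess it belongs, going by the
trace above):

    /* Untested diagnostic sketch: drop near the top of try_to_unmap_one()
     * in mm/rmap.c to catch reclaim unmapping a page from a VM_IO vma
     * such as gntdev's.
     */
    WARN_ONCE(vma->vm_flags & VM_IO,
              "reclaim unmapping page from VM_IO vma %#lx-%#lx (flags %#lx)\n",
              vma->vm_start, vma->vm_end, vma->vm_flags);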

Can you suggest any printks for me to add that might help isolate the issue?
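For instance, would something along these lines on the gntdev side be
useful? (Untested sketch; gntdev_report_invalidate() is just a name I
made up, it assumes the 4.4 layout of struct grant_map / struct
gntdev_priv, and it would live in drivers/xen/gntdev.c, called with
priv->lock held from mn_invl_range_start()/mn_invl_page().)

    /* Untested sketch for drivers/xen/gntdev.c (4.4 field layout assumed):
     * report whether the range being invalidated really overlaps one of
     * our mapped VMAs, and whether that VMA carries VM_IO as expected.
     * Call with priv->lock held from the mmu notifier callbacks.
     */
    static void gntdev_report_invalidate(struct gntdev_priv *priv,
                                         unsigned long start,
                                         unsigned long end)
    {
            struct grant_map *map;

            list_for_each_entry(map, &priv->maps, next) {
                    struct vm_area_struct *vma = map->vma;

                    if (!vma || vma->vm_start >= end || vma->vm_end <= start)
                            continue;
                    pr_warn("gntdev: invalidate [%#lx,%#lx) hits vma [%#lx,%#lx), flags %#lx (VM_IO %s)\n",
                            start, end, vma->vm_start, vma->vm_end,
                            vma->vm_flags,
                            (vma->vm_flags & VM_IO) ? "set" : "clear");
            }
    }

That would at least tell us whether kswapd is genuinely walking a gntdev
VMA, or whether the notifier (and hence priv->lock) is only being
invoked for unrelated ranges in the same mm.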

--Ed

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

