
[Xen-devel] Re: xenstored unsafe lock order detected, xlate_proc_name, evtchn_ioctl, port_user_lock



On Tue, Jul 20, 2010 at 09:17:26AM -0700, Jeremy Fitzhardinge wrote:
> On 07/20/2010 04:53 AM, Pasi Kärkkäinen wrote:
> > On Mon, Jun 07, 2010 at 09:50:13AM -0700, Jeremy Fitzhardinge wrote:
> >   
> >> On 06/07/2010 05:58 AM, Pasi Kärkkäinen wrote:
> >>     
> >>> On Sun, Jun 06, 2010 at 09:54:01PM +0300, Pasi Kärkkäinen wrote:
> >>>
> >>>> On Sun, Jun 06, 2010 at 10:41:04AM -0700, Jeremy Fitzhardinge wrote:
> >>>>
> >>>>> On 06/06/2010 10:33 AM, Pasi Kärkkäinen wrote:
> >>>>>
> >>>>>> Hello,
> >>>>>>
> >>>>>> I just tried the latest xen/stable-2.6.32.x kernel, i.e. 2.6.32.15,
> >>>>>> with Xen 4.0.0, and I got this:
> >>>>>>
> >>>>>> http://pasik.reaktio.net/xen/pv_ops-dom0-debug/log-2.6.32.15-pvops-dom0-xen-stable-x86_64.txt
> >>>>>>
> >>>>> Does this help?
> >>>>>
> >>>> The patch had failing hunks, so I had to apply it to 2.6.32.15 manually,
> >>>> but it seems to fix the issue: no more "unsafe lock order" messages.
> >>>>
> >>> Hmm... it seems I still get this:
> >>>
> >> OK, thanks.  Let me look at it; that was a first-cut patch I did the
> >> other day when I noticed the problem, but I hadn't got around to
> >> testing it myself.
> >>
> > I just tried the latest xen/stable-2.6.32.x (2.6.32.16 as of today),
> > without any additional patches, and I get this:
> >   
> 
> Is that new?
> 

I think it's the same as earlier.

-- Pasi

> 
> >
> > device vif1.0 entered promiscuous mode
> > virbr0: topology change detected, propagating
> > virbr0: port 1(vif1.0) entering forwarding state
> >   alloc irq_desc for 1242 on node 0
> >   alloc kstat_irqs on node 0
> >   alloc irq_desc for 1241 on node 0
> >   alloc kstat_irqs on node 0
> >   alloc irq_desc for 1240 on node 0
> >   alloc kstat_irqs on node 0
> > blkback: ring-ref 8, event-channel 8, protocol 1 (x86_64-abi)
> >   alloc irq_desc for 1239 on node 0
> >   alloc kstat_irqs on node 0
> > vif1.0: no IPv6 routers present
> >   alloc irq_desc for 1238 on node 0
> >   alloc kstat_irqs on node 0
> > ------------[ cut here ]------------
> > WARNING: at kernel/lockdep.c:2323 trace_hardirqs_on_caller+0xb7/0x135()
> > Hardware name: X7SB4/E
> > Modules linked in: ipt_MASQUERADE iptable_nat nf_nat bridge stp llc sunrpc 
> > ip6t_REJECT nf_conntrack_ipv6 ip6table_filter ip6_tables ipv6 xen_gntdev 
> > xen_evtchn xenfs e1000e shpchp i2c_i801 pcspkr iTCO_wdt iTCO_vendor_support 
> > serio_raw joydev floppy usb_storage video output aic79xx scsi_transport_spi 
> > radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core [last unloaded: 
> > scsi_wait_scan]
> > Pid: 23, comm: xenwatch Not tainted 2.6.32.16 #6
> > Call Trace:
> >  <IRQ>  [<ffffffff81059c41>] warn_slowpath_common+0x7c/0x94
> >  [<ffffffff8147896b>] ? _spin_unlock_irq+0x30/0x3c
> >  [<ffffffff81059c6d>] warn_slowpath_null+0x14/0x16
> >  [<ffffffff8108b186>] trace_hardirqs_on_caller+0xb7/0x135
> >  [<ffffffff8108b211>] trace_hardirqs_on+0xd/0xf
> >  [<ffffffff8147896b>] _spin_unlock_irq+0x30/0x3c
> >  [<ffffffff812c15d7>] add_to_net_schedule_list_tail+0x92/0x9b
> >  [<ffffffff812c1618>] netif_be_int+0x38/0xcd
> >  [<ffffffff810b8144>] handle_IRQ_event+0x53/0x119
> >  [<ffffffff810ba0e6>] handle_level_irq+0x7d/0xdf
> >  [<ffffffff812b6f45>] __xen_evtchn_do_upcall+0xe1/0x16e
> >  [<ffffffff812b74b8>] xen_evtchn_do_upcall+0x37/0x4c
> >  [<ffffffff81013f3e>] xen_do_hypervisor_callback+0x1e/0x30
> >  <EOI>  [<ffffffff8100940a>] ? hypercall_page+0x40a/0x100b
> >  [<ffffffff8100940a>] ? hypercall_page+0x40a/0x100b
> >  [<ffffffff812b9f9b>] ? notify_remote_via_evtchn+0x1e/0x44
> >  [<ffffffff814776a1>] ? __mutex_lock_common+0x36a/0x37b
> >  [<ffffffff812ba8b8>] ? xs_talkv+0x5c/0x174
> >  [<ffffffff812ba30c>] ? xb_write+0x16e/0x18a
> >  [<ffffffff812ba8c6>] ? xs_talkv+0x6a/0x174
> >  [<ffffffff81242b86>] ? kasprintf+0x38/0x3a
> >  [<ffffffff812bab15>] ? xs_single+0x3a/0x3c
> >  [<ffffffff812bb0f0>] ? xenbus_read+0x42/0x5b
> >  [<ffffffff812c3de4>] ? frontend_changed+0x655/0x681
> >  [<ffffffff812bc0e7>] ? xenbus_otherend_changed+0xe9/0x176
> >  [<ffffffff8100f34f>] ? xen_restore_fl_direct_end+0x0/0x1
> >  [<ffffffff8108d94e>] ? lock_release+0x198/0x1a5
> >  [<ffffffff812bc712>] ? frontend_changed+0x10/0x12
> >  [<ffffffff812ba63d>] ? xenwatch_thread+0x111/0x14c
> >  [<ffffffff81079d7a>] ? autoremove_wake_function+0x0/0x39
> >  [<ffffffff812ba52c>] ? xenwatch_thread+0x0/0x14c
> >  [<ffffffff81079aa8>] ? kthread+0x7f/0x87
> >  [<ffffffff81013dea>] ? child_rip+0xa/0x20
> >  [<ffffffff81013750>] ? restore_args+0x0/0x30
> >  [<ffffffff81013de0>] ? child_rip+0x0/0x20
> > ---[ end trace b036c0423b0ee26a ]---
> >   alloc irq_desc for 1237 on node 0
> >   alloc kstat_irqs on node 0
> > device vif2.0 entered promiscuous mode
> >
> >
> > -- Pasi
> >
> >   
> 
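
For reference, this WARNING looks like lockdep's check in
trace_hardirqs_on_caller() firing because interrupts get re-enabled
while we are still inside a hardirq handler: netif_be_int() runs in
interrupt context, and add_to_net_schedule_list_tail() drops its lock
with spin_unlock_irq(), which unconditionally turns interrupts back on.
Below is a minimal sketch of the pattern and of the usual
irqsave/irqrestore fix; the lock and list names are illustrative
placeholders, not the actual netback identifiers:

  #include <linux/spinlock.h>
  #include <linux/list.h>

  /* Placeholder names for illustration only. */
  static DEFINE_SPINLOCK(sched_list_lock);
  static LIST_HEAD(sched_list);

  /*
   * Buggy pattern: spin_unlock_irq() unconditionally re-enables
   * interrupts.  When this runs from a hardirq handler (as in the
   * netif_be_int() -> add_to_net_schedule_list_tail() path above),
   * lockdep warns, since we are still in interrupt context.
   */
  static void schedule_tail_buggy(struct list_head *entry)
  {
          spin_lock_irq(&sched_list_lock);
          list_add_tail(entry, &sched_list);
          spin_unlock_irq(&sched_list_lock);  /* IRQs forced back on */
  }

  /*
   * Safe pattern: save and restore the caller's interrupt state, so
   * the function is correct from both process and interrupt context.
   */
  static void schedule_tail_safe(struct list_head *entry)
  {
          unsigned long flags;

          spin_lock_irqsave(&sched_list_lock, flags);
          list_add_tail(entry, &sched_list);
          spin_unlock_irqrestore(&sched_list_lock, flags);
  }

With the irqsave variant the function restores whatever interrupt state
the caller had, so lockdep has nothing to complain about regardless of
where it is called from.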

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

