
Re: [Xen-devel] Re: [b.g.o.358549] blkfront: Move blkif_interrupt into a tasklet.



On 2011-06-27 19:30, Konrad Rzeszutek Wilk wrote:
> On Mon, Jun 27, 2011 at 05:04:30PM +0200, Marcin Mirosław wrote:
>> On 27.06.2011 16:13, Konrad Rzeszutek Wilk wrote:
>>> I wonder if the reason for this is some other config option. Can you send
>>> the full config, please?
>>
>> Yes, it's attached. The config is from a newer kernel, but the same problems
>> appear with it.
>>
>>> When does this happen? From the bug it looks to be happening just during
>>> bootup, right?
>>
>> Right, the bug with "block/blk-core" appears very early. The bug "kernel thread
> 
> OK, that is manifested in the DomU (PV guest), correct?

All the problems I described manifest in DomU. I don't have any access
to dom0.

>> "flush-254:9" does 100%CPU utilization" appears after a couple minutes
>> of work. Sometimes it happens after 15 minuts.
> 
> Ok, but that is in dom0, and I've no idea what Gentoo is using as dom0-type
> patches. For that one I would need you to use the latest Linux kernel
> (v3.0-rc4) and see if you get the same issue.

No, the problem with the "flush thread" also manifests in DomU. Dom0 is
probably running CentOS, but I don't know anything more about it :( .
These problems also appear on vanilla kernel 2.6.39_rc5-r7, without any
Gentoo patches.
I tried 3.0.0-rc4, and I got this twice:

[   11.168744] ------------[ cut here ]------------
[   11.168752] WARNING: at block/blk-core.c:239 blk_start_queue+0x1d/0x2d()
[   11.168755] Modules linked in:
[   11.168760] Pid: 0, comm: swapper Not tainted 3.0.0-rc4 #1
[   11.168763] Call Trace:
[   11.168770]  [<c1023387>] ? warn_slowpath_common+0x6a/0x7d
[   11.168774]  [<c11014c7>] ? blk_start_queue+0x1d/0x2d
[   11.168778]  [<c10233a7>] ? warn_slowpath_null+0xd/0x10
[   11.168782]  [<c11014c7>] ? blk_start_queue+0x1d/0x2d
[   11.168788]  [<c1178f6a>] ? kick_pending_request_queues+0x19/0x27
[   11.168792]  [<c1179177>] ? blkif_interrupt+0x1ff/0x216
[   11.168797]  [<c104acf0>] ? handle_irq_event_percpu+0x1d/0x100
[   11.168802]  [<c100a737>] ? sched_clock+0x9/0xd
[   11.168806]  [<c104adec>] ? handle_irq_event+0x19/0x25
[   11.168811]  [<c104c2fb>] ? handle_edge_irq+0x9b/0xb6
[   11.168815]  [<c114c4ef>] ? __xen_evtchn_do_upcall+0xf9/0x181
[   11.168820]  [<c114d25b>] ? xen_evtchn_do_upcall+0x16/0x23
[   11.168824]  [<c1242817>] ? xen_do_upcall+0x7/0xc
[   11.168829]  [<c10013a7>] ? hypercall_page+0x3a7/0x1000
[   11.168833]  [<c1004b00>] ? xen_safe_halt+0xf/0x19
[   11.168838]  [<c100b31a>] ? default_idle+0x29/0x47
[   11.168842]  [<c1005c43>] ? cpu_idle+0x77/0x91
[   11.168846]  [<c134f625>] ? start_kernel+0x2a3/0x2a9
[   11.168850]  [<c13503a2>] ? xen_start_kernel+0x57b/0x583
[   11.168853] ---[ end trace 7846c748fe8d32d4 ]---
[   12.306076] usbcore: registered new interface driver usbfs
[   12.306086] usbcore: registered new interface driver hub
[   12.306604] usbcore: registered new device driver usb
[   12.414643] Adding 614396k swap on /dev/mapper/sda6--7-swap1.
Priority:-1 extents:1 across:614396k
[   12.496706] Adding 511996k swap on /dev/mapper/sda6--7-swap2.
Priority:-2 extents:1 across:511996k
[   14.521926] ip6_tables: (C) 2000-2006 Netfilter Core Team
[   27.803890] ------------[ cut here ]------------
[   27.803898] WARNING: at block/blk-core.c:239 blk_start_queue+0x1d/0x2d()
[   27.803901] Modules linked in: tunnel4 xt_TCPMSS nf_conntrack_ipv6
nf_defrag_ipv6 ip6t_rt xt_state ip6table_mangle iptable_mangle
iptable_nat nf_nat ip6table_filter ip6_tables iptable_filter xt_owner
xt_NFQUEUE xt_multiport xt_mark xt_iprange xt_hashlimit xt_connmark usbcore
[   27.803948] Pid: 0, comm: swapper Tainted: G        W   3.0.0-rc4 #1
[   27.803951] Call Trace:
[   27.803957]  [<c1023387>] ? warn_slowpath_common+0x6a/0x7d
[   27.803962]  [<c11014c7>] ? blk_start_queue+0x1d/0x2d
[   27.803966]  [<c10233a7>] ? warn_slowpath_null+0xd/0x10
[   27.803971]  [<c11014c7>] ? blk_start_queue+0x1d/0x2d
[   27.803977]  [<c1178f6a>] ? kick_pending_request_queues+0x19/0x27
[   27.803982]  [<c1179177>] ? blkif_interrupt+0x1ff/0x216
[   27.803988]  [<c104acf0>] ? handle_irq_event_percpu+0x1d/0x100
[   27.803993]  [<c100a737>] ? sched_clock+0x9/0xd
[   27.803998]  [<c104adec>] ? handle_irq_event+0x19/0x25
[   27.804003]  [<c104c2fb>] ? handle_edge_irq+0x9b/0xb6
[   27.804008]  [<c114c4ef>] ? __xen_evtchn_do_upcall+0xf9/0x181
[   27.804013]  [<c114d25b>] ? xen_evtchn_do_upcall+0x16/0x23
[   27.804019]  [<c1242817>] ? xen_do_upcall+0x7/0xc
[   27.804024]  [<c10013a7>] ? hypercall_page+0x3a7/0x1000
[   27.804029]  [<c1004b00>] ? xen_safe_halt+0xf/0x19
[   27.804033]  [<c100b31a>] ? default_idle+0x29/0x47
[   27.804038]  [<c1005c43>] ? cpu_idle+0x77/0x91
[   27.804043]  [<c134f625>] ? start_kernel+0x2a3/0x2a9
[   27.804048]  [<c13503a2>] ? xen_start_kernel+0x57b/0x583
[   27.804051] ---[ end trace 7846c748fe8d32d5 ]---
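
For what it's worth, the warning at block/blk-core.c:239 seems to come from
the irqs_disabled() check at the top of blk_start_queue(), which the trace
reaches via blkif_interrupt() -> kick_pending_request_queues(). This is only
my reading of the 3.0-rc4 source, so please correct me if I got it wrong:

/* block/blk-core.c (3.0-rc4, roughly) -- blk_start_queue() expects to be
 * called with the queue lock held and interrupts disabled. */
void blk_start_queue(struct request_queue *q)
{
        WARN_ON(!irqs_disabled());      /* the WARNING above appears to fire here */

        queue_flag_clear(QUEUE_FLAG_STOPPED, q);
        __blk_run_queue(q);
}

So it looks like kick_pending_request_queues() ends up calling
blk_start_queue() at a point where interrupts are still enabled.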

The full message.log is attached to the Gentoo bug.
After 10-20 minutes and some disk activity, the kernel thread flush-254:9
starts eating CPU (and the load climbs above 10). When I run "sync" from the
console, I never get the console back; sync doesn't finish.
For me, nothing has changed in the newest kernel.

Regards!

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

