
[Xen-devel] Bug in pirq_guest_unbind.



I've been able to trigger this crash by making the
xen_unmap_pirq request happen a bit too early in the guest's
shutdown process. It does not happen with a stock xm unless you
move some functions around, but it nonetheless looks like a race
condition: the toolstack's domain-destruction path is still
closing event channels while the guest's unmap request tears
down the pirq-to-irq mapping. I was planning to work on this at
some point, but I still have dom0 work to finish up, so if
anybody wants to look at it I can give more details.
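
To make the suspected interleaving concrete, here is a minimal
sketch. It is not the real Xen source: the types are simplified
and the pirq_to_irq table name is my assumption; only the call
path (domain_kill -> evtchn_destroy -> __evtchn_close ->
pirq_guest_unbind) comes from the trace below.

/*
 * Minimal sketch of the suspected race -- simplified,
 * hypothetical types, NOT the real Xen definitions.
 */
#define NR_PIRQS 256

struct domain {
    int pirq_to_irq[NR_PIRQS];  /* assumed per-domain pirq -> irq table */
    /* ... event channels, locks, etc. elided ... */
};

void pirq_guest_unbind(struct domain *d, int pirq);  /* real entry point */

/*
 * Path A: the guest's (too early) PHYSDEVOP_unmap_pirq request.
 * This is what logs "forcing unbind of pirq 17" below and then
 * tears down the pirq -> irq mapping.
 */
void unmap_pirq_path(struct domain *d, int pirq)
{
    d->pirq_to_irq[pirq] = 0;           /* mapping goes away here */
}

/*
 * Path B: xm destroy -> domain_kill -> evtchn_destroy ->
 * __evtchn_close.  The event channel is still bound to the pirq,
 * so it calls pirq_guest_unbind() for a pirq whose mapping
 * path A is concurrently removing.
 */
void evtchn_close_path(struct domain *d, int pirq)
{
    /*
     * Nothing appears to serialize this against path A.
     * pirq_guest_unbind() translates pirq -> irq -> irq_desc; if
     * the translation is stale it takes the lock of a bogus
     * descriptor, and check_lock() faults -- which is exactly
     * where the trace below dies.
     */
    pirq_guest_unbind(d, pirq);
}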


856] switch: port 2(vif1008.0) entering disabled state
[  792.145541] device vif1008.0 left promiscuous mode
[  792.169512] pciback pci-1008-0: fe state changed 6
(XEN) irq.c:1514: dom1008: pirq 16 not mapped
(XEN) irq.c:1514: dom1008: pirq 16 not mapped
(XEN) irq.c:1522: dom1008: forcing unbind of pirq 17
(XEN) ----[ Xen-3.5-unstable  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    2
(XEN) RIP:    e008:[<ffff82c48011da93>] check_lock+0x1b/0x45
(XEN) rbp: ffff83021ff5fc78   rsp: ffff83021ff5fc78   r8:  0000000040caba40
(XEN) r9:  00000000deadbeef   r10: ffff82c4801ff7a0   r11: 0000000000000282
(XEN) r12: 0000000000000286   r13: ffff83821ff7f834   r14: 0000000000000044
(XEN) r15: ffff83821ff7f800   cr0: 000000008005003b   cr4: 00000000000006f0
(XEN) cr3: 000000020b1b4000   cr2: ffff83821ff7f838
(XEN) ds: 0000   es: 0000   fs: 0063   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83021ff5fc78:
(XEN)    ffff83021ff5fc98 ffff82c48011ddbe ffff83020ac62000 00000000ffffffef
(XEN)    ffff83021ff5fce8 ffff82c480154f98 ffff83021ff5fce8 0000000000000000
(XEN)    0000000000000000 ffff83020ac62000 000000000000001a 0000000000000011
(XEN)    ffff830219ce6280 0000000000000000 ffff83021ff5fd18 ffff82c480156de6
(XEN)    ffff83020ac62000 000000000000001a ffff82c4801f9ce0 ffff830219ce6280
(XEN)    ffff83021ff5fd78 ffff82c480106e51 ffff8300cfe84000 0000000000000270
(XEN)    ffff83020ac62180 0000000000000010 ffff83021ff5fd68 000000000000001a
(XEN)    ffff83020ac62000 ffff83020ac62180 0000000000305000 0000000000000004
(XEN)    ffff83021ff5fda8 ffff82c4801070d8 0000000000000296 ffff83020ac62000
(XEN)    00000000ffffffea 0000000040caba10 ffff83021ff5fdc8 ffff82c480106340
(XEN)    fffffffffffffff3 0000000040caba40 ffff83021ff5ff08 ffff82c480104c53
(XEN)    ffff83021ff5fde8 0000000000000000 ffff83021ff5fe38 ffff82c48015fcf8
(XEN)    ffff83021ff5fe08 0000000180148c7d 0000000000000000 ffff82c480107688
(XEN)    ffff82c480260008 ffffc9000003b120 ffffc9000003b120 0000000000000003
(XEN)    ffff83021ff5ff08 ffff82c480113923 0000000500000002 00000000000003f0
(XEN)    000000000161f6c0 0000000040cabd20 0000000000000000 00007f8f264c60a5
(XEN)    37a98d4a00000001 0000000000000000 00000000f666c676 0000000000000000
(XEN)    00007f8f263a4bb8 00000000000003f0 0000000040cabb90 00007f8f264cb4c2
(XEN)    0000000000000000 000000000161bec0 00007f8f25af6a10 00000000000003f0
(XEN)    ffff83021ff5fee8 ffff8300cfd24000 ffff8800050269c0 0000000040caba10
(XEN) Xen call trace:
(XEN)    [<ffff82c48011da93>] check_lock+0x1b/0x45
(XEN)    [<ffff82c48011ddbe>] _spin_lock_irqsave+0x21/0x67
(XEN)    [<ffff82c480154f98>] domain_spin_lock_irq_desc+0x49/0x9b
(XEN)    [<ffff82c480156de6>] pirq_guest_unbind+0x5c/0x10a
(XEN)    [<ffff82c480106e51>] __evtchn_close+0xd5/0x309
(XEN)    [<ffff82c4801070d8>] evtchn_destroy+0x53/0xeb
(XEN)    [<ffff82c480106340>] domain_kill+0x6c/0xe7
(XEN)    [<ffff82c480104c53>] do_domctl+0xa26/0x11e3
(XEN)    [<ffff82c4801eb1dc>] syscall_enter+0x10c/0x166
(XEN)    
(XEN) Pagetable walk from ffff83821ff7f838:
(XEN)  L4[0x107] = 0000000000000000 ffffffffffffffff
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 2:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: ffff83821ff7f838
(XEN) ****************************************
(XEN) 
(XEN) Reboot in five seconds...
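
One more data point from the dump: the faulting address
ffff83821ff7f838 is exactly r13 + 4, and the pagetable walk shows
its L4 entry is empty, i.e. the address is simply unmapped. That
is consistent with check_lock() dereferencing a wild irq_desc
lock pointer computed from a stale or garbage irq number. A
sketch of the lookup I suspect goes wrong, again with assumed
names (pirq_to_irq[], irq_desc[]), not the literal source:

typedef struct {
    volatile int raw;                   /* simplified spinlock */
} spinlock_t;

struct irq_desc {
    spinlock_t lock;
    /* ... handler, action, ... elided ... */
};

extern struct irq_desc irq_desc[];      /* global descriptor table */

struct irq_desc *stale_lookup(int *pirq_to_irq, int pirq)
{
    int irq = pirq_to_irq[pirq];        /* stale after the forced unbind */

    /*
     * Without revalidating irq against the unmap path, a garbage
     * index yields a wild &irq_desc[irq] pointer; taking
     * desc->lock then faults inside check_lock(), matching
     * cr2 == r13 + 4 above.
     */
    return &irq_desc[irq];
}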
