Re: [Xen-users] Bringing up second NIC crashes domU with an Oops
This sounds like bug #183.

Ted

On Wed, 2005-09-14 at 15:52 -0700, master@xxxxxxxxxxxxxxx wrote:
> Am I doing something wrong? If I remove the second bridge (xenbr1), the
> domU boots fine. If it is in place, it causes the following crash and
> brings down another domU with it.
>
> [root@teegeeack xen]# cat theta
> kernel = "/boot/vmlinuz-2.6.12-1.1454_FC4xenU"
> memory = 128
> name = "theta"
> nics = 2
> vif = [ 'bridge=xen-br0', 'bridge=xenbr1' ]
> disk = ['file:/theta.img,sda1,w']
> root = "/dev/sda1 ro"
> extra = "selinux=0"
>
> Bringing up loopback interface:  [ OK ]
> Bringing up interface eth0:  [ OK ]
> Bringing up interface eth1:  Unable to handle kernel paging request at
> virtual address c777c700
>  printing eip:
> *pde = ma 16eb8067 pa 0001b067
> *pte = ma 00000000 pa 55555000
> Oops: 0002 [#1]
> SMP
> Modules linked in: dm_mod
> CPU:    0
> EIP:    0061:[<c024ba10>]    Not tainted VLI
> EFLAGS: 00010216   (2.6.12-1.1454_FC4xenU)
> EIP is at netif_poll+0x190/0x790
> eax: 00000020   ebx: c006ce80   ecx: c777c04a   edx: c777c700
> esi: c777c000   edi: 00000002   ebp: c0497240   esp: c0395f58
> ds: 007b   es: 007b   ss: 0069
> Process arping (pid: 393, threadinfo=c0395000 task=c7911a80)
> Stack: ffffffff 00000112 00000000 c0395f8c 0111a9c5 c0132fe8 00000005 00000005
>        00000001 00000002 00000001 00000040 00000000 c0395f8c c0395f8c 00000000
>        00000001 dead4ead 00000001 c0497000 c0497104 c11064a0 c0395000 c0260d3c
> Call Trace:
>  [<c0132fe8>] rcu_check_quiescent_state+0x78/0x90
>  [<c0260d3c>] net_rx_action+0xdc/0x220
>  [<c0124bba>] __do_softirq+0x8a/0x120
>  [<c010e52b>] do_softirq+0x8b/0xb0
>  =======================
>  [<c0124ce5>] local_bh_enable+0x95/0xa0
>  [<c025ff88>] dev_queue_xmit+0x168/0x390
>  [<c02c8995>] packet_sendmsg+0x235/0x2e0
>  [<c025435a>] sock_sendmsg+0x12a/0x170
>  [<c0109944>] hypervisor_callback+0x2c/0x34
>  [<c0136da0>] autoremove_wake_function+0x0/0x60
>  [<c015cb33>] do_no_page+0x2e3/0x3e0
>  [<c021f7a0>] copy_from_user+0x60/0xf0
>  [<c0253ce8>] move_addr_to_kernel+0x48/0x70
>  [<c0255d61>] sys_sendto+0x121/0x160
>  [<c011510c>] do_page_fault+0x43c/0x705
>  [<c015e24f>] vma_link+0x5f/0x100
>  [<c0256906>] sys_socketcall+0x1d6/0x2b0
>  [<c010975d>] syscall_call+0x7/0xb
> Code: 43 04 00 00 00 00 c7 03 00 00 00 00 39 d1 0f 87 b6 03 00 00 8b 83 a4
> 00 00 00 8b b3 a0 00 00 00 29 f0 83 f8 0f 0f 8e 9f 03 00 00 <c7> 02 01 00
> 00 00 8b 83 ac 00 00 00 c7 40 04 00 00 00 00 8b 83
> <0>Kernel panic - not syncing: Fatal exception in interrupt
>  [<c011eae3>] panic+0x53/0x240
>  [<c010a17a>] die+0x17a/0x190
>  [<c0114fa2>] do_page_fault+0x2d2/0x705
>  [<c0118102>] recalc_task_prio+0xc2/0x170
>  [<c0259721>] kfree_skbmem+0x21/0x30
>  [<c027dae0>] ip_rcv+0xe0/0x5c0
>  [<c0109b22>] page_fault+0x2e/0x34
>  [<c024ba10>] netif_poll+0x190/0x790
>  [<c0132fe8>] rcu_check_quiescent_state+0x78/0x90
>  [<c0260d3c>] net_rx_action+0xdc/0x220
>  [<c0124bba>] __do_softirq+0x8a/0x120
>  [<c010e52b>] do_softirq+0x8b/0xb0
>  =======================
>  [<c0124ce5>] local_bh_enable+0x95/0xa0
>  [<c025ff88>] dev_queue_xmit+0x168/0x390
>  [<c02c8995>] packet_sendmsg+0x235/0x2e0
>  [<c025435a>] sock_sendmsg+0x12a/0x170
>  [<c0109944>] hypervisor_callback+0x2c/0x34
>  [<c0136da0>] autoremove_wake_function+0x0/0x60
>  [<c015cb33>] do_no_page+0x2e3/0x3e0
>  [<c021f7a0>] copy_from_user+0x60/0xf0
>  [<c0253ce8>] move_addr_to_kernel+0x48/0x70
>  [<c0255d61>] sys_sendto+0x121/0x160
>  [<c011510c>] do_page_fault+0x43c/0x705
>  [<c015e24f>] vma_link+0x5f/0x100
>  [<c0256906>] sys_socketcall+0x1d6/0x2b0
>  [<c010975d>] syscall_call+0x7/0xb
>
> and the other running domU also crashes:
>
> [root@teegeeack xen]# xm list
> Name        Id  Mem(MB)  CPU  VCPU(s)  State  Time(s)
> Domain-0     0      128    0        1  r----    229.0
> theta       11      127    0        1  ----c     10.7
> xenu        10      127    0        1  ----c     22.2
>
> and the second interface (11.1) is attached to the first bridge and not
> xenbr1.
>
> [root@teegeeack xen]# brctl show
> bridge name     bridge id               STP enabled     interfaces
> xen-br0         8000.0040f4ce392f       no              eth1
>                                                         vif0.0
>                                                         vif10.0
>                                                         vif11.0
>                                                         vif11.1
> xenbr1          8000.000000000000       no              can't get port info: Function not implemented

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
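The empty-bridge symptom is visible directly in the quoted `brctl show` output: xenbr1 reports the all-zero bridge id 8000.000000000000 and has no enslaved interfaces, so vif11.1 falls back onto xen-br0. This condition can be spotted programmatically. The following is a minimal sketch (a hypothetical helper, not part of the Xen or bridge-utils tools) that parses `brctl show` output and lists bridges with no attached interfaces:

```python
def empty_bridges(brctl_output):
    """Return names of bridges in `brctl show` output with no ports.

    A bridge with no enslaved interfaces shows the all-zero bridge id
    8000.000000000000, as xenbr1 does in the output quoted above.
    """
    empty = []
    for line in brctl_output.splitlines()[1:]:  # skip the header row
        # Bridge rows start at column 0; indented rows are extra
        # interfaces belonging to the previous bridge.
        if not line or line[0].isspace():
            continue
        name, bridge_id = line.split()[0], line.split()[1]
        # The MAC half of the bridge id is zero when nothing is enslaved.
        if bridge_id.split(".")[-1] == "000000000000":
            empty.append(name)
    return empty


if __name__ == "__main__":
    sample = """bridge name\tbridge id\t\tSTP enabled\tinterfaces
xen-br0\t\t8000.0040f4ce392f\tno\t\teth1
\t\t\t\t\t\t\tvif11.1
xenbr1\t\t8000.000000000000\tno"""
    print(empty_bridges(sample))  # ['xenbr1']
```

Fed the output from the original report, this flags xenbr1 as empty, which matches the "can't get port info" error and explains why the second vif ended up on xen-br0.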