
Re: [Xen-users] [Xen-devel] [BUG] 16 vcpus + 2 vif bridge = issue ?



On Thu, 2015-09-24 at 23:49 +0300, Roman Shubovich wrote:
> here are my configs and the log/tcpdump/ping/dmesg output
> i see no visible changes in any of the logs, but the 16-vcpu domU
> does not work properly
> 
> 
> and one more thing:
> when i try to start the domU with more than 16 vcpus, it won't start
> at all

This looks similar to an issue reported on xen-users earlier in the week.
Please check the list archives for the thread "Xen bridging issue"; IIRC
patches were proposed, but I'm not sure what their status is.

Thanks,
Ian.

> 
> vcpus=17
> 
> [    0.896865] xen_netfront: can't alloc rx grant refs
> [    0.896872] net eth1: only created 14 queues
> [    0.897084] BUG: unable to handle kernel NULL pointer dereference at
> 0000000000000018
> [    0.897090] IP: [<ffffffff81687f42>] netback_changed+0x952/0xfa0
> [    0.897099] PGD 0
> [    0.897103] Oops: 0000 [#1] SMP
> [    0.897107] Modules linked in:
> [    0.897111] CPU: 2 PID: 129 Comm: xenwatch Not tainted 3.18.21 #1
> [    0.897114] task: ffff88007b192800 ti: ffff88007b284000 task.ti:
> ffff88007b284000
> [    0.897117] RIP: e030:[<ffffffff81687f42>]  [<ffffffff81687f42>]
> netback_changed+0x952/0xfa0
> [    0.897123] RSP: e02b:ffff88007b287d78  EFLAGS: 00010202
> [    0.897125] RAX: 0000000000000000 RBX: 00000000000729c0 RCX:
> 0000000000000001
> [    0.897128] RDX: 0000000001555da0 RSI: ffff88001ee72a58 RDI:
> 0000000000003f1f
> [    0.897131] RBP: ffff88007b287e08 R08: ffffc90000340000 R09:
> 0000000000000001
> [    0.897134] R10: ffffea00007b5580 R11: ffffea0001ec8000 R12:
> ffff88001ee729c0
> [    0.897137] R13: ffff88001ed54000 R14: ffff88001ee72a58 R15:
> ffff88001ed55000
> [    0.897143] FS:  0000000000000000(0000) GS:ffff88007cb00000(0000)
> knlGS:0000000000000000
> [    0.897146] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
> [    0.897148] CR2: 0000000000000018 CR3: 000000000201e000 CR4:
> 0000000000042660
> [    0.900287] Stack:
> [    0.900287]  ffff88007b287df8 ffff88001ee6af84 ffff88001ee6b041
> ffff88007b321000
> [    0.900287]  ffff88007b2c2000 ffff88000000000f ffff88007b321000
> ffff880000000011
> [    0.900287]  000000017cb13300 000000015d13c5e0 0000002800000001
> ffff88005d13c631
> [    0.900287] Call Trace:
> [    0.900287]  [<ffffffff81460e4d>] xenbus_otherend_changed+0xad/0x110
> [    0.900287]  [<ffffffff81460210>] ? xenwatch_thread+0xb0/0x160
> [    0.900287]  [<ffffffff81460160>] ?
> unregister_xenbus_watch+0x220/0x220
> [    0.900287]  [<ffffffff814632a3>] backend_changed+0x13/0x20
> [    0.900287]  [<ffffffff814601ff>] xenwatch_thread+0x9f/0x160
> [    0.900287]  [<ffffffff818d1bf0>] ?
> _raw_spin_unlock_irqrestore+0x20/0x40
> [    0.900287]  [<ffffffff810af870>] ? prepare_to_wait_event+0x110/0x110
> [    0.900287]  [<ffffffff8108e889>] kthread+0xc9/0xe0
> [    0.900287]  [<ffffffff8108e7c0>] ? kthread_create_on_node+0x180/0x180
> [    0.900287]  [<ffffffff818d23d8>] ret_from_fork+0x58/0x90
> [    0.900287]  [<ffffffff8108e7c0>] ? kthread_create_on_node+0x180/0x180
> [    0.900287] Code: c6 e9 d6 fd ff ff 48 8b 7d a0 48 c7 c2 db bb df 81
> be f4 ff ff ff 31 c0 4c 8b 7d 90 e8 48 65 dd ff eb 8f 49 8b 44 24 20 4c
> 89 f6 <48> 8b 78 18 e8 55 67 dd ff 85 c0 0f 88 ca fd ff ff 49 8b 44 24
> [    0.900287] RIP  [<ffffffff81687f42>] netback_changed+0x952/0xfa0
> [    0.900287]  RSP <ffff88007b287d78>
> [    0.900287] CR2: 0000000000000018
> [    0.900287] ---[ end trace 413a209251215943 ]---
> 
> 
> i have a custom kernel and i think some of its options are misconfigured
> at the moment the domU works properly only if the vcpu number is <= 15
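
One plausible reading of the "can't alloc rx grant refs" / "only created 14
queues" lines above: netfront typically brings up one queue pair per vcpu for
each vif, and every queue pair pins grant references for its tx/rx rings, so a
large vcpu count combined with two vifs can exhaust the guest's grant-table
allowance. A rough back-of-envelope sketch, assuming single-page rings of 256
slots each (about 512 grants per queue) and a default grant table of 32 frames
of 512 entries; the actual constants depend on the kernel and Xen build:

    # Back-of-envelope grant-reference budget (all constants are assumed
    # defaults, not values read from this system)
    vcpus=16                     # netfront queues per vif ~= number of vcpus
    vifs=2
    grants_per_queue=512         # ~256 tx + ~256 rx ring slots per queue
    table=$((32 * 512))          # assumed default: 32 grant frames x 512 entries
    needed=$((vcpus * vifs * grants_per_queue))
    echo "need ~${needed} of ${table} grants, before blkfront and friends"

With those assumed numbers, 15 vcpus and 2 vifs stay just under the limit while
16 fill it completely, which lines up with the <= 15 behaviour reported here.
If the hypervisor supports it, raising the limit via the Xen boot option
gnttab_max_frames (older trees spell it gnttab_max_nr_frames) would be another
thing to try besides capping vcpus.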
> 
> 
> 2015-09-24 12:09 GMT+03:00 Ian Campbell <ian.campbell@xxxxxxxxxx>:
> > On Thu, 2015-09-24 at 09:56 +0100, Ian Campbell wrote:
> > > On Thu, 2015-09-24 at 03:16 +0300, Roman Shubovich wrote:
> > > > hi
> > > >
> > > > i have a physical server with 40 cpu cores
> > > > and i need to create an hvm domU with at least 16 vcpus and 2
> > > > network bridges
> > > > when i start that domU i hit an issue i don't understand - the
> > > > second bridge gets no traffic from the network (only the first
> > > > interface, the first one declared in the config file, works). i can
> > > > see traffic with tcpdump in dom0, but not on the vif interface
> > > > created by the domU startup script.
> > > >
> > > > when i reduce the number of vcpus to 15 or less the bridges work fine
> > >
> > > Please post some logs:
> > 
> > Also I didn't notice this went to xen-devel@, which is a list for
> > _development_ of Xen. User support and configuration issues belong on
> > xen-users@.
> > 
> > If I had noticed this I would have added -users to the CC and moved
> > -devel to BCC in my previous reply. If you see this before you reply to
> > my previous mail please adjust the Cc's appropriately, otherwise please
> > try and remember to use the appropriate list next time.
> > 
> > Thanks,
> > Ian.
> > 
> > >  * dmesg of both host and guest
> > >  * output of these commands in dom0 while the guest is running with
> > >    2 vifs configured (but only one working):
> > >     * "brctl show"
> > >     * "ifconfig -a"
> > >  * The output of "ifconfig -a" within the guest in the same
> > >    configuration.
> > >  * The guest configuration file you are using.
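
The dom0 side of that list can be collected in one pass, along the lines of
this sketch (file names are arbitrary placeholders):

    # in dom0, while the guest is running with both vifs configured
    dmesg        > dom0-dmesg.txt
    brctl show   > brctl-show.txt
    ifconfig -a  > dom0-ifconfig.txt
    # inside the guest:
    #   dmesg       > domU-dmesg.txt
    #   ifconfig -a > domU-ifconfig.txt
    # then attach these together with the guest configuration file used to
    # start the domU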
> > >
> > > Thanks.
> > > Ian.
> > >
> > > >
> > > > system:
> > > > dom0 ubuntu 14.04.03 kernel 3.18.21
> > > > domu ubuntu 14.04.03 kernel 3.18.21
> > > > tried xen:
> > > > xen 4.4
> > > > xen 4.5
> > > > xen 4.6
> > 

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

