
Re: [Xen-users] Xen bridging issue.



On Mon, 2015-09-07 at 15:50 +0300, johnny Strom wrote:
> 
> Hello
> 
> I sent an email before about bridging not working in domU using Debian 
> 8.1 and Xen 4.4.1.
> 
> It was not the network card "igb" as I first thought.
> 
> I managed to get bridging working in domU if I set the limit of vCPUs 
> in dom0 to 14. This is from /etc/default/grub 
> when it works OK:
> 
> GRUB_CMDLINE_XEN="dom0_max_vcpus=14 dom0_vcpus_pin"
> 
> 
> Are there any known issues/limitations running Xen with more 
> than 14 CPU cores in dom0?
> 
> 
> The cpu in question is:
> 
> processor       : 16
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 63
> model name      : Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
> stepping        : 2
> microcode       : 0x2d
> cpu MHz         : 2298.718
> cache size      : 25600 KB
> physical id     : 0
> siblings        : 17
> core id         : 11
> cpu cores       : 9
> apicid          : 22
> initial apicid  : 22
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 15
> wp              : yes
> flags           : fpu de tsc msr pae mce cx8 apic sep mca cmov pat 
> clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good 
> nopl nonstop_tsc eagerfpu pni pclmulqdq monitor est ssse3 fma cx16 
> sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand 
> hypervisor lahf_lm abm ida arat epb xsaveopt pln pts dtherm fsgsbase 
> bmi1 avx2 bmi2 erms
> bogomips        : 4597.43
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 46 bits physical, 48 bits virtual
> power management:
> 
> 
> 
> 
> If I set it to 17 in dom0:
> 
> GRUB_CMDLINE_XEN="dom0_max_vcpus=17 dom0_vcpus_pin"
> 
> Then I get this oops when I try to boot a domU with 40 vCPUs.
> 
> [    1.588313] systemd-udevd[255]: starting version 215
> [    1.606097] xen_netfront: Initialising Xen virtual ethernet driver
> [    1.648172] blkfront: xvda2: flush diskcache: enabled; persistent 
> grants: enabled; indirect descriptors: disabled;
> [    1.649190] blkfront: xvda1: flush diskcache: enabled; persistent 
> grants: enabled; indirect descriptors: disabled;
> [    1.649705] Setting capacity to 2097152
> [    1.649716] xvda2: detected capacity change from 0 to 1073741824
> [    1.653540] xen_netfront: can't alloc rx grant refs

The frontend has run out of grant refs, perhaps due to multiqueue support
in the front/backend, where I think the number of queues scales with the
number of processors.
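
As a rough sanity check of that theory: with the 3.16 multiqueue code the
frontend seems to create min(guest vCPUs, backend-advertised max) queues,
and the backend's default tends to follow dom0's vCPU count, which would
explain the "only created 17 queues" line below. Each queue then needs
grant refs for a 256-slot TX ring plus a 256-slot RX ring.
Back-of-the-envelope, assuming v1 grant entries (512 per 4 KiB frame) and
the default of 32 grant frames:

    refs per queue   ~ 256 (TX) + 256 (RX)       =   512
    refs per vif     ~ 17 queues x 512           ~  8700
    default ceiling  ~ 32 frames x 512 entries   = 16384

so two vifs (the log shows at least eth1) plus blkfront's grants can
already run past the default allowance.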

I've added some relevant maintainers for net{front,back} and grant tables,
plus people who were involved with MQ and the devel list.
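
If that is indeed what is happening, a couple of knobs might be worth
trying (untested here, and the exact names depend on the kernel and Xen
versions in use): cap the number of queues via the max_queues module
parameters that came in with the multiqueue support, and/or give the guest
a bigger grant table. Something along these lines:

    # Guest kernel command line: cap netfront queues per vif
    xen_netfront.max_queues=4

    # Dom0 kernel command line: cap netback queues per vif
    xen_netback.max_queues=4

    # Xen command line in dom0's /etc/default/grub: allow more grant-table
    # frames per domain (older Xen spells the option gnttab_max_nr_frames)
    GRUB_CMDLINE_XEN="dom0_max_vcpus=17 dom0_vcpus_pin gnttab_max_frames=64"

Shrinking dom0_max_vcpus back down (as your working 14-vCPU case shows)
helps for the same reason: fewer backend vCPUs means fewer queues and
fewer grant refs.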


> [    1.653547] net eth1: only created 17 queues
> [    1.654027] BUG: unable to handle kernel NULL pointer dereference at 
> 0000000000000018
> [    1.654033] IP: [] netback_changed+0x964/0xee0 
> [xen_netfront]
> [    1.654041] PGD 0
> [    1.654044] Oops: 0000 [#1] SMP
> [    1.654048] Modules linked in: xen_netfront(+) xen_blkfront(+) 
> crct10dif_pclmul crct10dif_common crc32c_intel
> [    1.654057] CPU: 3 PID: 209 Comm: xenwatch Not tainted 3.16.0-4-amd64 
> #1 Debian 3.16.7-ckt11-1+deb8u3
> [    1.654061] task: ffff880faf477370 ti: ffff880faf478000 task.ti: 
> ffff880faf478000
> [    1.654064] RIP: e030:[] [] 
> netback_changed+0x964/0xee0 [xen_netfront]
> [    1.654071] RSP: e02b:ffff880faf47be20  EFLAGS: 00010202
> [    1.654074] RAX: 0000000000000000 RBX: ffff880002a729c0 RCX: 
> 0000000000000001
> [    1.654077] RDX: 000000000066785c RSI: ffff880002a72a58 RDI: 
> 0000000000003f1f
> [    1.654080] RBP: ffff880faa44e000 R08: ffffc90006240000 R09: 
> ffffea0036d3f180
> [    1.654083] R10: 000000000000251e R11: 0000000000000000 R12: 
> ffff880faa44f000
> [    1.654086] R13: ffff880002a72a58 R14: 00000000000729c0 R15: 
> ffff880fab6f4000
> [    1.654093] FS:  0000000000000000(0000) GS:ffff880fb7060000(0000) 
> knlGS:0000000000000000
> [    1.654096] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
> [    1.654099] CR2: 0000000000000018 CR3: 0000000001813000 CR4: 
> 0000000000042660
> [    1.654102] Stack:
> [    1.654104]  ffff880faf5aec00 ffff880f0000000f 0000001100000001 
> ffff880faf5aec00
> [    1.654109]  ffff880002a6b041 ffff880002a6af84 00000001af561000 
> 0000001100000001
> [    1.656945]  ffff8800028e9df1 ffff8800028e8880 ffff880faf47beb8 
> ffffffff8135b9e0
> [    1.656945] Call Trace:
> [    1.656945]  [] ? unregister_xenbus_watch+0x220/0x220
> [    1.656945]  [] ? xenwatch_thread+0x98/0x140
> [    1.656945]  [] ? prepare_to_wait_event+0xf0/0xf0
> [    1.656945]  [] ? kthread+0xbd/0xe0
> [    1.656945]  [] ? kthread_create_on_node+0x180/0x180
> [    1.656945]  [] ? ret_from_fork+0x58/0x90
> [    1.656945]  [] ? kthread_create_on_node+0x180/0x180
> [    1.656945] Code: 48 89 c6 e9 bd fd ff ff 48 8b 3c 24 48 c7 c2 b3 52 
> 06 a0 be f4 ff ff ff 31 c0 e8 38 61 2f e1 e9 54 ff ff ff 48 8b 43 20 4c 
> 89 ee <48> 8b 78 18 e8 13 63 2f e1 85 c0 0f 88 b0 fd ff ff 48 8b 43 20
> [    1.656945] RIP  [] netback_changed+0x964/0xee0 
> [xen_netfront]
> [    1.656945]  RSP 
> [    1.656945] CR2: 0000000000000018
> [    1.656945] ---[ end trace d92264e4041d27a1 ]---
> 
> 
> 
> Best regards Johnny
> 

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

