
Re: [Xen-devel] [Xen-users] Xen bridging issue.



On Tue, Sep 08, 2015 at 09:58:59AM +0100, Ian Campbell wrote:
> On Mon, 2015-09-07 at 15:50 +0300, johnny Strom wrote:
> > 
> > Hello
> > 
> > I sent an email earlier about bridging not working in domU using Debian 
> > 8.1 and Xen 4.4.1.
> > 
> > It was not the network card "igb" as I first thought.
> > 
> > I managed to get bridging working in domU if I set the limit of CPUs 
> > in dom0 to 14; this is from /etc/default/grub
> > when it works ok:
> > 
> > GRUB_CMDLINE_XEN="dom0_max_vcpus=14 dom0_vcpus_pin"
> > 
> > 
> > Are there any known issues/limitations running Xen with more than 
> > 14 CPU cores in dom0?
> > 
> > 
> > The cpu in question is:
> > 
> > processor       : 16
> > vendor_id       : GenuineIntel
> > cpu family      : 6
> > model           : 63
> > model name      : Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
> > stepping        : 2
> > microcode       : 0x2d
> > cpu MHz         : 2298.718
> > cache size      : 25600 KB
> > physical id     : 0
> > siblings        : 17
> > core id         : 11
> > cpu cores       : 9
> > apicid          : 22
> > initial apicid  : 22
> > fpu             : yes
> > fpu_exception   : yes
> > cpuid level     : 15
> > wp              : yes
> > flags           : fpu de tsc msr pae mce cx8 apic sep mca cmov pat 
> > clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good 
> > nopl nonstop_tsc eagerfpu pni pclmulqdq monitor est ssse3 fma cx16 
> > sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand 
> > hypervisor lahf_lm abm ida arat epb xsaveopt pln pts dtherm fsgsbase 
> > bmi1 avx2 bmi2 erms
> > bogomips        : 4597.43
> > clflush size    : 64
> > cache_alignment : 64
> > address sizes   : 46 bits physical, 48 bits virtual
> > power management:
> > 
> > 
> > 
> > 
> > If I set it to 17 in dom0:
> > 
> > GRUB_CMDLINE_XEN="dom0_max_vcpus=17 dom0_vcpus_pin"
> > 
> > Then I get this oops when I try to boot domU with 40 vCPUs.
> > 
> > [    1.588313] systemd-udevd[255]: starting version 215
> > [    1.606097] xen_netfront: Initialising Xen virtual ethernet driver
> > [    1.648172] blkfront: xvda2: flush diskcache: enabled; persistent 
> > grants: enabled; indirect descriptors: disabled;
> > [    1.649190] blkfront: xvda1: flush diskcache: enabled; persistent 
> > grants: enabled; indirect descriptors: disabled;
> > [    1.649705] Setting capacity to 2097152
> > [    1.649716] xvda2: detected capacity change from 0 to 1073741824
> > [    1.653540] xen_netfront: can't alloc rx grant refs
> 
> The frontend has run out of grant refs, perhaps due to multiqueue support
> in the front/backend, where I think the number of queues scales with the
> number of processors.
> 
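
For context, the "can't alloc rx grant refs" message is the per-queue setup
failing to reserve a ring's worth of grant references; each queue needs one
reference per tx and per rx ring slot, so the demand grows linearly with the
number of queues. A minimal sketch of that reservation (not the exact
xen-netfront source; the ring size of 256 assumes 4K pages):

/*
 * Sketch of per-queue grant reservation in a netfront-style driver.
 * N queues need roughly N * (NET_TX_RING_SIZE + NET_RX_RING_SIZE) refs.
 */
#include <linux/errno.h>
#include <xen/grant_table.h>

#define NET_TX_RING_SIZE 256	/* assumed ring size with 4K pages */
#define NET_RX_RING_SIZE 256

struct demo_queue {
	grant_ref_t gref_tx_head;
	grant_ref_t gref_rx_head;
};

static int demo_init_queue_grants(struct demo_queue *q)
{
	/* One grant reference per tx ring slot. */
	if (gnttab_alloc_grant_references(NET_TX_RING_SIZE,
					  &q->gref_tx_head) < 0)
		return -ENOMEM;

	/*
	 * One grant reference per rx ring slot.  This is the reservation
	 * that fails once the guest's grant table is exhausted, giving the
	 * "can't alloc rx grant refs" message above.
	 */
	if (gnttab_alloc_grant_references(NET_RX_RING_SIZE,
					  &q->gref_rx_head) < 0) {
		gnttab_free_grant_references(q->gref_tx_head);
		return -ENOMEM;
	}

	return 0;
}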

The default number of queues is the number of _backend_ processors.  The Xen
command line indicates 17 dom0 vCPUs, which isn't too large, I think.
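
Roughly speaking, the frontend reads the backend's advertised limit from
xenstore and clamps it to its own maximum; a sketch of that negotiation
(illustrative only, the helper name and the single-queue fallback are
assumptions):

#include <linux/kernel.h>
#include <xen/xenbus.h>

/*
 * Illustrative only: pick the number of queues as the smaller of the
 * backend's advertised "multi-queue-max-queues" and the frontend's own
 * cap (which by default tends to follow the number of online CPUs).
 */
static unsigned int demo_choose_num_queues(struct xenbus_device *dev,
					   unsigned int frontend_max)
{
	unsigned int backend_max;

	if (xenbus_scanf(XBT_NIL, dev->otherend,
			 "multi-queue-max-queues", "%u", &backend_max) != 1)
		backend_max = 1;	/* backend did not advertise a limit */

	return min(backend_max, frontend_max);
}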

Can you check in xenstore what the value of multi-queue-max-queues is?
Use xenstore-ls /local/domain/$DOMID/ when the guest is still around.

> I've added some relevant maintainers for net{front,back} and grant tables,
> plus people who were involved with MQ and the devel list.
> 
> 
> > [    1.653547] net eth1: only created 17 queues

This indicates it actually only created 16 queues, and there seems to be a
bug in that code.
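
For reference, that warning presumably comes from a queue-creation loop of
roughly this shape (a sketch, not the exact source).  If per-queue
initialisation fails at index i, only i queues exist, and the caller's queue
count needs to be trimmed to match, otherwise the reported number and the
state the rest of the driver assumes go out of sync:

#include <linux/errno.h>
#include <linux/printk.h>

struct demo_queue { unsigned int id; };

/* Placeholder for the real per-queue init (grant refs, rings, ...); the
 * real thing can fail part-way through, e.g. when grant refs run out. */
static int demo_init_queue(struct demo_queue *q)
{
	return 0;
}

static int demo_create_queues(struct demo_queue *queues,
			      unsigned int *num_queues)
{
	unsigned int i;

	for (i = 0; i < *num_queues; i++) {
		queues[i].id = i;
		if (demo_init_queue(&queues[i]) < 0) {
			/* Queues 0..i-1 were set up, i.e. exactly i queues. */
			pr_warn("only created %u queues\n", i);
			/*
			 * Shrink the caller's count too, otherwise later code
			 * keeps using the requested number and touches queues
			 * that were never initialised.
			 */
			*num_queues = i;
			break;
		}
	}

	return *num_queues ? 0 : -ENOMEM;
}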

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

