
Re: [Xen-devel] netfront/netback multiqueue exhausting grants




On 2016/1/21 9:17, David Vrabel wrote:
On 21/01/16 12:19, Ian Campbell wrote:
On Thu, 2016-01-21 at 10:56 +0000, David Vrabel wrote:
On 20/01/16 12:23, Ian Campbell wrote:
There have been a few reports recently[0] which relate to a failure of
netfront to allocate sufficient grant refs for all the queues:

[    0.533589] xen_netfront: can't alloc rx grant refs
[    0.533612] net eth0: only created 31 queues

This can be worked around by increasing the number of grants on the
hypervisor command line, or by limiting the number of queues permitted by
either back or front using a module parameter (which was broken but is now
fixed on both sides; I'm not sure the fix has been backported everywhere,
so it isn't yet a reliable thing to always tell users as a workaround).
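For reference, a sketch of the two workarounds mentioned above; the exact
option names assume a Xen and kernels recent enough to carry them:

```shell
# Xen hypervisor command line: raise the grant-table frame limit
# (the default discussed below in this thread is 32 frames).
gnttab_max_frames=64

# Or cap the number of queues with the module parameters:
xen_netfront.max_queues=2   # on the guest kernel command line
xen_netback.max_queues=2    # on the dom0 kernel command line
```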

Is there any plan to do anything about the default/out-of-the-box
experience? Either limiting the number of queues or making both ends cope
more gracefully with failure to create some queues (or both) might be
sufficient?

I think the crash after the above in the first link at [0] is fixed? I
think that was the purpose of ca88ea1247df ("xen-netfront: update
num_queues to real created") which was in 4.3.
I think the correct solution is to increase the default maximum grant
table size.
That could well make sense, but then there will just be another higher
limit, so we should perhaps do both.

i.e. factoring in:
  * performance, i.e. the ability for N queues to saturate whatever sort of
    link contemporary Linux can saturate these days, plus some headroom (or
    whatever other ceiling seems sensible)
  * grant table resource consumption, i.e. (sensible max number of blks * nr
    gnts per blk + sensible max number of vifs * nr gnts per vif + other
    devs' needs) < per-guest grant limit
to pick both the default gnttab size and the default max queues.
Yes.

Would it waste lots of resources in the case where a guest vif has lots of
queues but no network load? Here is an example of the grant refs consumed
by a vif:

Dom0 20 vcpus, domU 20 vcpus:
one vif would consume 20*256*2 = 10240 grant refs (20 queues, 256 ring
slots each, tx + rx rings).

If the maximum grant table size is set to 64 pages (the Xen default is 32
pages now?), then only 3 vifs are supported in the guest. And that is
before blk is taken into account, let alone the blk multi-page ring
feature.
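The arithmetic above can be sketched as follows (a back-of-envelope check
only; the helper names are illustrative, and it assumes v1 grant entries,
i.e. 8 bytes each, so one 4 KiB grant-table frame holds 512 entries):

```python
# One 4 KiB grant-table frame holds 512 v1 entries (8 bytes each).
ENTRIES_PER_FRAME = 4096 // 8

def vif_grants(queues, ring_size=256):
    """Grant refs one vif pre-allocates: tx + rx ring slots per queue."""
    return queues * ring_size * 2

def max_vifs(gnttab_frames, queues):
    """How many such vifs fit in a grant table of the given size."""
    return (gnttab_frames * ENTRIES_PER_FRAME) // vif_grants(queues)

print(vif_grants(20))    # 10240 refs for a 20-queue vif
print(max_vifs(64, 20))  # 3 vifs fit in a 64-frame grant table
```

This matches the figures in the mail: a 20-queue vif pins 10240 refs, and a
64-page table (32768 entries) can hold only 3 such vifs, before counting blk.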

Thanks
Annie

Although, unless you're using the not-yet-applied per-cpu rwlock patches,
multiqueue is terrible on many (multisocket) systems and the number of
queues should be limited in netback to 4 or even just 2.
Presumably the guest can't tell, so it can't do this.

I think when you say "terrible" you don't mean "worse than without mq" but
rather "not realising the expected gains from a larger number of queues",
right?
Malcolm did the analysis but if I remember correctly, 8 queues performed
about the same as 1 queue and 16 were worse than 1 queue.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel



