
Re: [Xen-devel] Increase txqueuelen of vif devices


  • To: Miroslav Rezanina <mrezanin@xxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxx>
  • Date: Fri, 22 Oct 2010 11:22:49 +0100
  • Cc:
  • Delivery-date: Fri, 22 Oct 2010 03:23:46 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Actx0xOJ3vThwJUMREWEr6LQ7OOOSQ==
  • Thread-topic: [Xen-devel] Increase txqueuelen of vif devices

The expectation was that domU would push enough receive buffers to dom0 to
avoid packet loss; the txqueuelen is just a fallback for when that doesn't
happen. Still, yeah, it could be increased if it improves performance given
the default domU netfront behaviour.
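
For anyone who wants to try this, a minimal sketch of raising a vif's queue
length from dom0 via the generic SIOCSIFTXQLEN ioctl; the device name
"vif1.0" and the value 1000 below are only placeholders, and the same effect
is available from ifconfig or ip with their txqueuelen options.

/* Sketch only: set the tx queue length of one interface from dom0.
 * "vif1.0" and 1000 are placeholder values, not agreed defaults. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/sockios.h>

int main(void)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "vif1.0", IFNAMSIZ - 1);
    ifr.ifr_qlen = 1000;                 /* desired txqueuelen */

    if (ioctl(fd, SIOCSIFTXQLEN, &ifr) < 0) {
        perror("SIOCSIFTXQLEN");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}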

 -- Keir


On 22/10/2010 10:35, "Miroslav Rezanina" <mrezanin@xxxxxxxxxx> wrote:

> During performance testing we found that increasing txqueuelen can improve
> throughput significantly. The following table shows our results:
> 
> txQlen      netperf message size
>           512 byte     4096 byte
> ---------------------------------
>    32      1634.13       8402.32
>    64      1292.05      14198.48
>   128      4142.58      14677.39
>   256      4439.77      14626.80
>   512      5251.48      14809.59
>  1024      4875.96      15358.55
> 
> 
> Based on these results, wouldn't it be a good idea to change the default
> txqueuelen? Physical devices use a txqueuelen of 1000.
> 
> Regards,
> Mirek
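
As a footnote to the comparison with physical devices, here is a small
sketch for reading back the current txqueuelen of any interface via the
generic SIOCGIFTXQLEN ioctl, e.g. to check what a vif and a physical NIC
are actually set to; the interface name comes from the command line and
nothing in it is Xen-specific.

/* Sketch only: print the current txqueuelen of the named interface. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
    struct ifreq ifr;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <ifname>\n", argv[0]);
        return 1;
    }

    fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, argv[1], IFNAMSIZ - 1);

    if (ioctl(fd, SIOCGIFTXQLEN, &ifr) < 0) {
        perror("SIOCGIFTXQLEN");
        close(fd);
        return 1;
    }

    printf("%s txqueuelen = %d\n", argv[1], ifr.ifr_qlen);
    close(fd);
    return 0;
}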



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

