[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-devel] Dom0 network queues


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: Rogério Vinhal Nunes <rogeriovnunes@xxxxxxxxx>
  • Date: Mon, 1 Nov 2010 13:46:53 -0200
  • Delivery-date: Mon, 01 Nov 2010 08:47:45 -0700
  • Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; b=M158dJmp50OwZJ9cks/MRV4Y5i3B3ZcFjW5quOrdupvg2rxotOHgryrpdvVx6VgH0+ vt5BIF0Fedvjfphpt3iRMslNEFicFWseA0e4XE4vz6EEPr7c+XYt+JhnbfJNFX2L1iL1 AZZ86cU8joqpSQ5OKwYtO7Zm72oZKq5X+ybRQ=
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hello,

I am studying the impact of virtualization on high-bandwidth, low-latency systems such as datacenters. Motivated by a paper from INFOCOM'10, I am running experiments to explore alternatives to this particular system organization.

I have done some research on Xen networking configuration, but I still do not understand the inner workings well, so I was hoping you could provide me with some information.

The experiment I am trying to run right now measures the impact of CPU scheduling on domU networking. As the INFOCOM paper suggests, while a domU is scheduled off, the dom0 queues fill with UDP packets; when the domU is scheduled back in, those queued packets are delivered memory-to-memory, above the theoretical maximum network speed, for a short period, improving the average UDP throughput of scheduled-off domUs. This was observed on Amazon EC2 instances, and I am trying to reproduce it on my private system.

I would like to know whether you can provide information on these queues, so that I can experiment with different sizes to reproduce and exploit this phenomenon. I am using Xen 3.4.0 and the credit scheduler to limit the domUs' processor share. I have already tried to tune the kernel sysctl variables net.core.rmem_max, net.core.rmem_default, net.ipv4.udp_mem, and net.ipv4.udp_rmem_min, but none of them seemed to make a significant difference relative to the original Ubuntu 9.10 setup. I am watching tcpdump logs to see the throughput over time, hoping to identify the phenomenon.
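For reference, the tuning I attempted in dom0 looks roughly like the following (the specific values are just examples I picked while experimenting, not recommendations):

```shell
# Raise the maximum and default socket receive buffer sizes (bytes).
# These particular values are only illustrative.
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.rmem_default=8388608

# UDP memory limits (min / pressure / max), in pages.
sysctl -w net.ipv4.udp_mem="65536 131072 262144"

# Minimum receive buffer guaranteed to each UDP socket (bytes).
sysctl -w net.ipv4.udp_rmem_min=16384
```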

I also saw a discussion last week about the txqueuelen of the vifs, which concluded that these queues may be too small. This is exactly the kind of information I am after: which queues I can try to tune in this setup, which kernel configuration could help performance, and how to change them, especially the receive queues.
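If I understood that thread correctly, the vif transmit queue length can be inspected and changed from dom0 along these lines ("vif1.0" is just a placeholder for the backend interface, vif<domid>.<devid>, of the domU in question):

```shell
# Show the current settings of a vif backend in dom0, including txqueuelen.
ip link show vif1.0

# Raise the transmit queue length; the default on these vifs is reportedly
# very small (32 packets), which is what the thread was discussing.
ip link set dev vif1.0 txqueuelen 1000

# Equivalent using ifconfig:
ifconfig vif1.0 txqueuelen 1000
```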

If my experiments are successful, I hope my work can be used to improve Xen-based systems.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
