
[Xen-users] CPU intensive VM starves IO intensive VMs


  • To: "Xen Users" <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: "Tim Wood" <twwood@xxxxxxxxx>
  • Date: Tue, 29 Aug 2006 13:31:33 -0400
  • Delivery-date: Tue, 29 Aug 2006 10:32:26 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hi,
I'm noticing very bad performance when one VM is running a
CPU-intensive job and another VM is doing a network-intensive task.

For example:

I run Iperf and measure the attained bandwidth with and without
running a CPU Hog application at the same time. The hog app just runs
in an infinite loop performing calculations.
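For reference, here is a minimal sketch of the kind of hog loop I mean. The real app just spins forever doing arithmetic; the iteration bound below is added only so this sketch terminates, and isn't part of the actual hog:

```shell
# Minimal CPU-hog sketch: spin doing arithmetic so the vCPU never
# yields voluntarily. The real hog loops forever; the bound on n is
# here only so this example terminates.
i=0
n=0
while [ "$n" -lt 100000 ]; do
    i=$(( (i + 1) % 7 ))
    n=$(( n + 1 ))
done
echo "$i"   # 100000 mod 7 = 5
```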

When I do this in Dom0 I get essentially the same bandwidth: 921
Mb/sec without the hog, 920 Mb/sec with.  This is on a gigabit
network, so that seems right.  It makes sense that running the CPU hog
doesn't really affect bandwidth, since the IO-intensive job shouldn't
require much real computation beyond protocol processing.

If I do this inside a VM, I see 447 Mb/sec without the hog and 109
Mb/sec with it.  I can understand the gap between dom0 and the
hog-free VM, given Xen's I/O virtualization overhead, but such a
large additional drop when the hog is running doesn't seem right.

If the hog app is running in a separate VM, performance is even worse
- only 97 Mb/sec.

In all of these examples I am using the sedf scheduler with equal CPU
weights for dom0 and all VMs.  Despite this, in the two-VM scenario,
the scheduler ends up giving 99% of the CPU to the VM running the hog
app, practically starving the IO-intensive VM.
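For concreteness, I set the weights with xm sched-sedf, roughly like this. The domain names here are made up, and the exact option handling differs between Xen 3.0.x releases, so check `xm help sched-sedf` on your build:

```shell
# Give dom0 and both guests equal sedf weights (weight-driven mode,
# where the relative weights determine each domain's CPU share).
# Domain names "vm-io" and "vm-hog" are placeholders; option syntax
# may vary between Xen 3.0.x releases.
xm sched-sedf Domain-0 -w 1
xm sched-sedf vm-io    -w 1
xm sched-sedf vm-hog   -w 1
```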

I am aware that the next version of Xen uses the new credit scheduler
- does anyone know whether that scheduler tries to deal with these
kinds of issues?  The changes I had heard about mostly concerned
better SMP support.

-Tim

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

