
Re: [Xen-users] CPU intensive VM starves IO intensive VMs


  • To: "Tim Wood" <twwood@xxxxxxxxx>
  • From: "Diwaker Gupta" <diwaker.lists@xxxxxxxxx>
  • Date: Fri, 1 Sep 2006 16:58:02 -0700
  • Cc: Xen Users <xen-users@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 01 Sep 2006 16:58:52 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

In all of these examples I am using the sedf scheduler with equal CPU
weights for dom0 and all VMs.  Despite this, in the 2 VM scenario, the
scheduler ends up giving 99% of the CPU to the VM running the hog app,
practically starving the IO intensive VM.

Are you running in work-conserving mode or non-work-conserving mode?
(The "extra" flag is set to 0 for the latter.) I have done similar
experiments with good results in non-work-conserving mode.
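For reference, here is a sketch of how non-work-conserving mode can be set
with the sedf scheduler via xm. The domain names are hypothetical, the
period/slice values are just one example split, and the exact sched-sedf
argument order varies between Xen releases, so check `xm help sched-sedf`
on your version:

```shell
# Guarantee each domain a 20ms slice per 100ms period (values in ns).
# extratime=0 means a domain may NOT consume idle CPU beyond its slice,
# i.e. non-work-conserving mode; extratime=1 would be work-conserving.
#
#             domain  period    slice    latency  extratime  weight
xm sched-sedf io-vm   100000000 20000000 0        0          0
xm sched-sedf cpu-vm  100000000 20000000 0        0          0
```

With extratime=0 on both domains, the CPU hog is capped at its slice and
cannot crowd out the IO-bound VM's scheduling opportunities.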
--
Web/Blog/Gallery: http://floatingsun.net/blog

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

