
Re: [Xen-devel] sedf testing: volunteers please


  • To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Stephan Diestelhorst <sd386@xxxxxxxxxxxx>
  • Date: Tue, 28 Jun 2005 09:55:40 +0100
  • Delivery-date: Tue, 28 Jun 2005 08:54:49 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Xuehai,

  your idea is about right. Things to notice are:
How much slice did you give to dom0?
If dom0 gets 75% of the CPU, then the other two domains share the
remaining 20% (100% - 75% - 5% kept back as a small reserve) in the
requested ratio. That means they will get 2/10 * 20% = 4% and
8/10 * 20% = 16% of the CPU. This is guaranteed, and they won't exceed
their reservation because they don't have the extratime flag set!
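
As a quick sanity check, here is the same arithmetic as a small shell
snippet (the 75% dom0 slice and the ~5% reserve are just the assumptions
from above, not measured values):

  remaining=$((100 - 75 - 5))           # ~20% left over for vm1 and vm2
  echo "vm1: $((remaining * 2 / 10))%"  # weight 2 of 10 -> 4%
  echo "vm2: $((remaining * 8 / 10))%"  # weight 8 of 10 -> 16%

That lines up with the roughly 4% and 17% that slurp reported.
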
You could either:
  -reduce the reservation for dom0
or
  -use the extratime flag for your domains vm1 and vm2 to give them any
remaining time (which drives them in weighted extratime mode), for example:
  xm sedf vm1 0 0 0 1 2
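
Spelled out, the two options would look roughly like this (assuming I
have the argument order right: period, slice, latency hint, extratime
flag, weight, with times in ns as in the mini-HOWTO; the dom0 numbers
are purely an illustration):

  # option 1: give dom0 a smaller reservation, e.g. 20ms of every 100ms
  xm sedf 0 100000000 20000000 0 0 0
  # option 2: pure weights plus the extratime flag, so vm1 and vm2 soak
  # up whatever CPU dom0 leaves unused
  xm sedf vm1 0 0 0 1 2
  xm sedf vm2 0 0 0 1 8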

Hope that helps,
  Stephan

>Stephan,
>
>I enabled the sedf scheduler by applying the patch to the Xen testing
>tree, not the unstable tree.
>
>Then I did the following test. I started two user domains (named "vm1" and
>"vm2" respectively). I made the following sedf configurations:
>
>xm sedf vm1 0 0 0 0 2
>xm sedf vm2 0 0 0 0 8
>
>My intention is to have vm1 reserve 20% of the available cpu and vm2
>reserve the remaining 80% (please correct me if my understanding of
>sedf here is wrong).
>
>Then I started a "slurp" job in both domains, which prints out the cpu
>share continuously. To my surprise, vm1 takes around 4% of the cpu and
>vm2 occupies around 17%. I was expecting them to share the cpu as
>something like 20% and 80%, though the ratio of 4% to 17% is similar to
>that of 20% to 80%. BTW, dom0 didn't run any extra job while I ran the
>test.
>
>Could you please let me know why only 21% (4% + 17%) of the cpu is given
>to vm1 and vm2, rather than 100% minus the share taken by dom0?
>
>Thanks.
>
>Xuehai
>
>On Wed, 18 May 2005, Stephan Diestelhorst wrote:
>
>  
>
>>The new sedf scheduler has been in the xen-unstable repository for a
>>couple of days now. As it may become the default scheduler soon, any
>>testing now is much appreciated!
>>
>>Quick summary can be found in docs/misc/sedf_scheduler_mini-HOWTO.txt
>>
>>Future directions:
>>-effective scheduling of SMP-guests
>>  -clever SMP locking in domains (on the way)
>>  -timeslice donating (under construction)
>>  -identifying gangs and schedule them together
>>  -balancing of domains/VCPUs
>>
>>Any comments/wishes/ideas/... on that are welcome!
>>
>>Best,
>>  Stephan Diestelhorst
>>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

