
[Xen-ia64-devel] RE: [Xen-devel] RE: [Patch] Fix IDLE issue with sedf scheduler on IA64


  • To: "Magenheimer, Dan \(HP Labs Fort Collins\)" <dan.magenheimer@xxxxxx>
  • From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
  • Date: Fri, 15 Jul 2005 11:18:08 +0800
  • Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 15 Jul 2005 03:16:47 +0000
  • List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
  • Thread-index: AcWHkNam4njROoPTSRq2Lw7NMoU4SwAE6cuAAAJhJYAAA5hiwAAW6AAgABzA0JAAF7VhMA==
  • Thread-topic: [Xen-devel] RE: [Patch] Fix IDLE issue with sedf scheduler on IA64

>From: Magenheimer, Dan (HP Labs Fort Collins)
>[mailto:dan.magenheimer@xxxxxx]
>Sent: Thursday, July 14, 2005 11:52 PM
>
>> >I think domain0 only goes in the waitq at one point -- when
>> >it calls pal_halt_light to idle its virtual machine.  This
>> >case could be easily changed (there is already some code there)
>> >to ensure domain0 is always runnable.
>>
>> As I said in another mail, too many pal_halt_light calls in
>> Dom0's idle loop are even worse than the current IDLE domain.
>> (At least an unmodified dom0 can't change that behavior.)
>
>You misunderstand what I was suggesting:  When the hypervisor
>recognizes that a domain did a pal_halt_light:
>
>if (current == dom0) {
>        if (current is_the_only_non_idle_domain_on_the_run_queue) {
>                REAL_pal_halt_light;  // processor to low power state
>                return;               // back to domain0
>        }
>        else do_sched_op(SCHEDOP_yield);
>}
>else do_sched_op(SCHEDOP_yield);
>
>Dan

Sounds good, but then you need help from the scheduler to export that
runqueue information. Also, for (current != dom0), SCHEDOP_block is
better than SCHEDOP_yield for emulating pal_halt_light: a yielded vcpu
stays on the run queue and keeps getting scheduled, while a blocked
vcpu sleeps until an event wakes it...
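
For illustration, a minimal C sketch of the combined idea is below. It
assumes a hypothetical helper, runq_has_other_runnable(), which the
scheduler would have to export; that name and vcpu_halt_light() are
illustrative only, not existing Xen interfaces:

/* Sketch only: vcpu_halt_light() and runq_has_other_runnable()
 * are hypothetical names, not existing Xen interfaces. */
static void vcpu_halt_light(struct vcpu *v)
{
    if (v->domain == dom0) {
        if (!runq_has_other_runnable(v)) {
            real_pal_halt_light();      /* processor to low-power state */
            return;                     /* back to domain0 */
        }
        do_sched_op(SCHEDOP_yield);     /* let another domain run */
    } else {
        /* Blocking takes the vcpu off the run queue until an event
         * wakes it, which models halt more closely than repeatedly
         * yielding. */
        do_sched_op(SCHEDOP_block);
    }
}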

Thanks,
Kevin

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel



