
[Xen-devel] lazy context switching



Hi Keir, I noticed changeset 027812e4a63c, in which you split off 
context_switch_finalise() from context_switch(). I really appreciate the 
comments you added!

/*
 * Called by the scheduler to switch to another VCPU. On entry, although
 * VCPUF_running is no longer asserted for @prev, its context is still running
 * on the local CPU and is not committed to memory. The local scheduler lock
 * is therefore still held, and interrupts are disabled, because the local CPU
 * is in an inconsistent state.
 * 
 * The callee must ensure that the local CPU is no longer running in @prev's
 * context, and that the context is saved to memory, before returning.
 * Alternatively, if implementing lazy context switching, it suffices to
 * ensure that invoking __sync_lazy_execstate() will switch and commit
 * @prev's state.
 */
extern void context_switch(
    struct vcpu *prev,
    struct vcpu *next);
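
Just to check my reading of that contract, here is how I picture the two
options it allows. This is only a sketch, not actual Xen code: struct vcpu is
left opaque, and load_state(), save_state(), and the lazy_prev pointer are
made-up stand-ins.

struct vcpu;

extern void load_state(struct vcpu *v);   /* hypothetical helper */
extern void save_state(struct vcpu *v);   /* hypothetical helper */

static struct vcpu *lazy_prev;            /* per-CPU in real code */

void context_switch(struct vcpu *prev, struct vcpu *next)
{
    load_state(next);            /* local CPU now runs @next's context */

    /* Eager option: call save_state(prev) here, before returning.     */

    /* Lazy option: leave @prev's state live on this CPU, as long as   */
    /* __sync_lazy_execstate() will switch away from it and commit it. */
    lazy_prev = prev;
}

void __sync_lazy_execstate(void)
{
    if ( lazy_prev != NULL )
    {
        save_state(lazy_prev);   /* commit @prev's state to memory */
        lazy_prev = NULL;
    }
}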

PowerPC has a relatively large set of (general-purpose) registers; roughly 
half are volatile across function calls and half are nonvolatile. When we take 
an exception, we do not save the nonvolatiles in the exception handler, since 
we may be returning to the same domain anyway, and in that case the C calling 
convention ensures that the nonvolatiles remain correct.
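
To make that split concrete, our exception frame is conceptually something 
like the struct below. The layout and names are illustrative only, not our 
actual code; the volatile/nonvolatile partition is just the usual PowerPC ELF 
ABI one, with r0 and r3-r12 volatile and r14-r31 nonvolatile.

struct exception_frame {
    /* Written unconditionally on every exception entry. */
    unsigned long volatile_gprs[11];     /* r0, r3-r12 */
    unsigned long lr, ctr, xer, cr;
    unsigned long srr0, srr1;            /* interrupted PC and MSR */

    /*
     * Not written on entry.  If we return to the same domain, these
     * registers were never clobbered: every C function we ran in the
     * meantime preserved them per the ABI, so the guest's values are
     * still live in the hardware registers.
     */
    unsigned long nonvolatile_gprs[18];  /* r14-r31, filled in only when
                                            we end up switching domains */
};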

Later on, if it turns out we are switching domains, we save/restore all the 
state we can from C, then return to the exception handler, which saves the old 
set of nonvolatiles and loads the new one. Until that point, some of the 
domain's register state is spread arbitrarily across our stack.
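
So our context_switch() ends up looking roughly like the sketch below (helper 
names made up again). Note what is missing: the nonvolatile swap happens in 
the exception exit assembly, after this function has returned.

extern void save_c_visible_state(struct vcpu *v);  /* hypothetical */
extern void load_c_visible_state(struct vcpu *v);  /* hypothetical */

void context_switch(struct vcpu *prev, struct vcpu *next)
{
    /* Everything reachable from C: FP/vector state, MMU context, ... */
    save_c_visible_state(prev);
    load_c_visible_state(next);

    /*
     * @prev's nonvolatile GPRs are NOT saved here: they are still live
     * in r14-r31 (or spilled into C stack frames along the way), and
     * are only stored, with @next's loaded in their place, once we
     * unwind back into the exception handler's assembly exit path.
     */
}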

That means that context_switch() cannot actually save all of @prev's state to 
memory (and neither can __sync_lazy_execstate()) -- only by returning all the 
way to assembly can we accomplish that.
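
Put in code terms, the best a PowerPC __sync_lazy_execstate() could do is 
something like this (reusing the made-up names from the sketches above), which 
is why I don't think it can meet the comment's fallback requirement either:

void __sync_lazy_execstate(void)
{
    struct vcpu *prev = lazy_prev;       /* from the earlier sketch */

    if ( prev == NULL )
        return;

    /* We can flush whatever C-level code has a handle on... */
    save_c_visible_state(prev);

    /*
     * ...but not the nonvolatile GPRs: those are still sitting in
     * r14-r31 and in stack frames below us, and only the assembly
     * exception exit path can store them into @prev's save area.
     * So this function cannot, by itself, commit all of @prev's state.
     */
}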

Thoughts?

-- 
Hollis Blanchard
IBM Linux Technology Center

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

