
Re: [Xen-devel] Fwd: [v3 14/15] Update Posted-Interrupts Descriptor during vCPU scheduling

On Thu, 2015-07-02 at 04:32 +0000, Wu, Feng wrote:
> > -----Original Message-----
> > From: Dario Faggioli [mailto:dario.faggioli@xxxxxxxxxx]
> > Sent: Tuesday, June 30, 2015 10:58 AM
> > To: Wu, Feng
> > Cc: xen-devel; keir@xxxxxxx; jbeulich@xxxxxxxx; andrew.cooper3@xxxxxxxxxx;
> > Tian, Kevin; Zhang, Yang Z; george.dunlap@xxxxxxxxxxxxx; Wu, Feng
> > Subject: Re: Fwd: [v3 14/15] Update Posted-Interrupts Descriptor during vCPU
> > scheduling
> > 
> > On Mon, 2015-06-29 at 18:36 +0100, Andrew Cooper wrote:
> > 
> > >
> > > The basic idea here is:
> > > 1. When vCPU's state is RUNSTATE_running,
> > >         - set 'NV' to 'Notification Vector'.
> > >         - Clear 'SN' to accept PI.
> > >         - set 'NDST' to the right pCPU.
> > > 2. When vCPU's state is RUNSTATE_blocked,
> > >         - set 'NV' to 'Wake-up Vector', so we can wake up the
> > >           related vCPU when posted-interrupt happens for it.
> > >         - Clear 'SN' to accept PI.
> > > 3. When vCPU's state is RUNSTATE_runnable/RUNSTATE_offline,
> > >         - Set 'SN' to suppress non-urgent interrupts.
> > >           (Current, we only support non-urgent interrupts)
> > >         - Set 'NV' back to 'Notification Vector' if needed.
> > >
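To make the quoted transition table concrete, here is a minimal,
hypothetical C sketch. The pi_desc layout, helper names, and vector
values below are illustrative only, not Xen's actual definitions:

```c
/* Hypothetical sketch of the three transitions described above.
 * The pi_desc layout and vector values are illustrative, not the
 * real hardware/Xen definitions. */
#include <stdint.h>

struct pi_desc {
    uint16_t nv;    /* notification vector */
    uint16_t sn;    /* suppress notification: 1 = suppressed */
    uint32_t ndst;  /* notification destination (pCPU APIC ID) */
};

enum { NOTIFY_VEC = 0xf2, WAKEUP_VEC = 0xf3 }; /* illustrative values */

/* 1. vCPU is running: deliver posted interrupts directly to its pCPU. */
static void pi_on_running(struct pi_desc *d, uint32_t pcpu_apic_id)
{
    d->nv = NOTIFY_VEC;     /* set 'NV' to the notification vector */
    d->sn = 0;              /* clear 'SN' to accept PI */
    d->ndst = pcpu_apic_id; /* point 'NDST' at the right pCPU */
}

/* 2. vCPU is blocked: arm the wake-up vector so a posted interrupt
 * can wake the related vCPU. */
static void pi_on_blocked(struct pi_desc *d)
{
    d->nv = WAKEUP_VEC;     /* set 'NV' to the wake-up vector */
    d->sn = 0;              /* clear 'SN' to accept PI */
}

/* 3. vCPU is runnable/offline: suppress non-urgent interrupts. */
static void pi_on_runnable_or_offline(struct pi_desc *d)
{
    d->sn = 1;              /* set 'SN' to suppress non-urgent PI */
    d->nv = NOTIFY_VEC;     /* restore the notification vector */
}
```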
> > It might be me, but it feels a bit odd to see RUNSTATE-s being (ab)used
> > directly for this, as it does feel odd to see arch-specific code being
> > added in there.
> > 
> > Can't this be done in context_switch(), which is already architecture
> > specific? I was thinking of something very similar to what has been done
> > for PSR, i.e., on x86, putting everything in __context_switch().
> > 
> > Looking at who's prev and who's next, and at what pause_flags each has
> > set, you should be able to implement all of the above logic.
> > 
> > Or am I missing something?
> As mentioned in the description of this patch, we need to do
> something when the vCPU's state changes. Can we get the state
> transition, such as "running -> blocking", in __context_switch()?
Well, in the patch description you mention how you've done it, so of
course it mentions runstates.

That does not necessarily mean "we need to do something" in
vcpu_runstate_change(). Actually, that's exactly what I'm asking: can
you check whether the thing that you need doing can be done somewhere
other than in vcpu_runstate_change()?

In fact, looking at how, where, and for what purpose runstates are
used, this really does not feel right, at least to me. What you seem to
be interested in is whether a vCPU blocks and/or unblocks. Runstates are
an abstraction, built (mostly) on top of pause_flags, like _VPF_blocked
(look at how the runstate is updated).

I think you should not build on top of such an abstraction, but on top
of pause_flags directly. I had a quick look, and it indeed seems to me
that you can get all you need from there too. It might even result in
the code looking simpler (but that's of course hard to tell without
actually trying). In fact, inside the context switching code, you
already know that prev was running, so, if it has the proper flag set,
it means it's blocking (i.e., going to RUNSTATE_blocked, in runstate
language); if not, it is probably being preempted (i.e., going to
RUNSTATE_runnable). Therefore, you can enact all your logic without any
need to keep track of the previous runstate, and without needing to
build a full state machine covering all possible transitions.

So, can you have a look at whether that solution can fly? Because, if it
does, I think it would be a lot better.

<<This happens because I choose it to happen!>> (Raistlin Majere)
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
