
Re: [Xen-devel] Fwd: [v3 14/15] Update Posted-Interrupts Descriptor during vCPU scheduling




> -----Original Message-----
> From: Dario Faggioli [mailto:dario.faggioli@xxxxxxxxxx]
> Sent: Thursday, July 09, 2015 8:42 PM
> To: Wu, Feng
> Cc: George Dunlap; Tian, Kevin; keir@xxxxxxx; andrew.cooper3@xxxxxxxxxx;
> xen-devel; jbeulich@xxxxxxxx; Zhang, Yang Z
> Subject: Re: [Xen-devel] Fwd: [v3 14/15] Update Posted-Interrupts Descriptor
> during vCPU scheduling
> 
> On Thu, 2015-07-09 at 11:38 +0000, Wu, Feng wrote:
> >
> > > -----Original Message-----
> > > From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
> George
> > > Dunlap
> 
> > > > Why do you think vcpu_runstate_change() is not the right place to do
> this?
> > >
> > > Because what the vcpu_runstate_change() function does at the moment is
> > > *update the vcpu runstate variable*.  It doesn't actually change the
> > > runstate -- the runstate is changed in the various bits of code that
> > > call it; and it's not designed to be a generic place to put hooks on
> > > the runstate changing.
> > >
> > > I haven't done a thorough review of this yet, but at least looking
> > > through this patch, and skimming the titles, I don't see anywhere you
> > > handle migration -- what happens if a vcpu that's blocked / offline /
> > > runnable migrates from one cpu to another?  Is the information
> > > updated?
> >
> > Thanks for your review!
> >
> > Migration is handled in arch_pi_desc_update(), which is called
> > by vcpu_runstate_change().
> >
> > >
> > > The right thing to do in this situation is either to change
> > > vcpu_runstate_change() so that it is the central place to make all (or
> > > most) hooks happen;
> >
> > Yes, this is my implementation. I think vcpu_runstate_change()
> > is the _central_ place to do things when vCPU state is changed. This
> > makes things clear and simple. I call an arch hook to update the
> > posted-interrupt descriptor in this function.
> >
> Perhaps, one way to double check this line of reasoning (the fact that
> you think this needs to lie on top of runstates, and more specifically
> in that function), would be to come up with some kind of "list of
> requirements", not taking runstates into account.
> 
> I know there is a design document for this series (and I also know I
> could have commented on it earlier, sorry for that), but that itself
> mentions runstates, which does not help.
> 
> What I mean is, can you describe when each specific operation needs
> to happen? Something like "descriptor needs to be updated like
> this upon migration", "notification should be disabled when vcpu starts
> running", "notification method should be changed that other way when
> vcpu is preempted", etc.

I cannot see the difference; I think the requirements are already clearly
listed in the design doc and in the comments of this patch.
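As an illustration only (this is not the actual Xen code: the helper names,
vector values, and the simplified descriptor layout below are made up for the
example, with field names loosely following the Intel SDM), the per-transition
descriptor updates being discussed look roughly like this:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified model of a VT-d posted-interrupts descriptor. */
struct pi_desc {
    uint32_t ndst;   /* Notification Destination: APIC ID of the target pCPU */
    uint8_t  nv;     /* Notification Vector delivered when an IRQ is posted */
    unsigned sn:1;   /* Suppress Notification bit */
};

#define POSTED_INTR_VECTOR  0xf2  /* normal notification vector (made-up value) */
#define PI_WAKEUP_VECTOR    0xf1  /* wakeup vector used while blocked (made-up) */

/* vCPU starts running on a pCPU (covers migration too, since NDST is
 * re-pointed at the new pCPU): use the normal vector, no suppression. */
static void pi_desc_on_run(struct pi_desc *pi, uint32_t pcpu_apic_id)
{
    pi->ndst = pcpu_apic_id;
    pi->nv = POSTED_INTR_VECTOR;
    pi->sn = 0;
}

/* vCPU blocks: switch to the wakeup vector so an arriving interrupt can
 * kick the scheduler and unblock it; notification must stay enabled. */
static void pi_desc_on_block(struct pi_desc *pi)
{
    pi->nv = PI_WAKEUP_VECTOR;
    pi->sn = 0;
}

/* vCPU is preempted (runnable but not running): suppress notifications;
 * pending interrupts are picked up when it is scheduled in again. */
static void pi_desc_on_preempt(struct pi_desc *pi)
{
    pi->sn = 1;
}
```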

> 
> This would help a lot, IMO, figuring out the actual functional
> requirements that need to be satisfied for things to work well. Once
> that is done, we can go check in the code where is the best place to put
> each call, hook, or whatever.
> 
> 
> Note that I've already tried to infer the above, by looking at the
> patches, and that is making me think that it would be possible to
> implement things in another way. But maybe I'm missing something. So it
> would be really valuable if you, with all your knowledge of how PI
> should work, could do it.

I have been describing, from the beginning, how PI works, what the purpose
of the two vectors is, and how special they are.
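To illustrate the "central place" arrangement discussed above (a simplified
sketch only, not the actual Xen implementation; the function names are
borrowed from the thread, and the bookkeeping is reduced to counters for
clarity), routing every runstate transition through one arch hook looks
roughly like this:

```c
#include <assert.h>

/* Simplified runstate set, mirroring Xen's RUNSTATE_* naming. */
enum runstate { RUNSTATE_running, RUNSTATE_runnable,
                RUNSTATE_blocked, RUNSTATE_offline };

static enum runstate last_hooked_state;
static int hook_calls;

/* Hypothetical arch hook: in the series this is where the posted-interrupts
 * descriptor (vector, destination, suppression bit) would be updated. */
static void arch_pi_desc_update(int vcpu_id, enum runstate new_state)
{
    (void)vcpu_id;
    last_hooked_state = new_state;
    hook_calls++;
}

/* Central place: every caller that changes a vCPU's runstate funnels
 * through here, so the arch hook observes every transition exactly once,
 * including those caused by migration. */
static void vcpu_runstate_change(int vcpu_id, enum runstate new_state)
{
    /* ... update runstate bookkeeping (times, current state) here ... */
    arch_pi_desc_update(vcpu_id, new_state);
}
```

The design choice being debated is whether this funnel is an appropriate
hook point, given that vcpu_runstate_change() today only updates accounting.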

Thanks,
Feng


> 
> Thanks and Regards,
> Dario
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

