
Re: [Xen-devel] [PATCH v3 11/13] xen/arm: send IPIs to inject irqs into guest vcpus running on different pcpus



On Thu, 25 Apr 2013, Ian Campbell wrote:
> On Wed, 2013-04-24 at 20:07 +0100, Stefano Stabellini wrote:
> > If we need to inject an irq into a guest that is running on a different
> 
>                                      ^VCPU
> 
> > processor, we shouldn't just enqueue the irq into the lr_pending and
> > inflight lists and wait for something to interrupt the guest execution.
> > Send an IPI to the target pcpu so that Xen can inject the new interrupt
> > returning to guest.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> 
> Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
> 
> > ---
> >  xen/arch/arm/vgic.c |    2 ++
> >  1 files changed, 2 insertions(+), 0 deletions(-)
> > 
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 8912aab..547a2ab 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -664,6 +664,8 @@ out:
> >      spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> >      /* we have a new higher priority irq, inject it into the guest */
> >      vcpu_unblock(v);
> > +    if ( v != current && v->is_running )
> > +        smp_send_event_check_mask(cpumask_of(v->processor));
> 
> I'm a bit concerned about races here, e.g. where v is busy going to
> sleep.
> I don't know enough about the behaviour of the scheduler and
> vcpu_unblock etc to say, but it worries me...
 
That is what the vcpu_unblock call above is for.
I think that should be enough to make sure that the scheduler doesn't
shelve the vcpu.


> x86 has vcpu_kick which saves v->is_running before the unblock and then
> sends a softirq with much the same checks as you have -- that seems like
> a plausible race avoidance strategy, although whether IPI vs softirq is
> an important distinction I've no idea...

Yes, it looks like the code should be race-free, although it might be
possible to improve it by checking is_running before vcpu_unblock to
avoid a useless IPI. I could make that change.
I don't think the softirq vs IPI distinction matters: the purpose is to
force the other vcpu to check for events in the near future, and the
exact point in time shouldn't make a difference.
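
Concretely, the tail of vgic_vcpu_inject_irq could end up looking
roughly like this (untested sketch along the lines of x86's vcpu_kick;
the local variable is only there for illustration):

    /* Sample is_running before the unblock, as x86's vcpu_kick does,
     * so that we don't send a pointless IPI to a pcpu whose vcpu
     * wasn't running when the interrupt was raised. */
    bool_t running = v->is_running;

    /* we have a new higher priority irq, inject it into the guest */
    vcpu_unblock(v);
    if ( running && v != current )
        smp_send_event_check_mask(cpumask_of(v->processor));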


> BTW smp_send_event_check_mask(cpumask_of(v->processor)) is the same as
> smp_send_event_check_cpu(v->processor).

Yeah, I'll make this change.
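i.e. the last two lines of the sketch above would simply become:

    if ( running && v != current )
        smp_send_event_check_cpu(v->processor);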
