
Re: [Xen-devel] [PATCH v4 08/10] xen/arm: second irq injection while the first irq is still inflight



On Mon, 2014-03-24 at 12:54 +0000, Stefano Stabellini wrote:
> On Fri, 21 Mar 2014, Ian Campbell wrote:
> > On Fri, 2014-03-21 at 16:46 +0000, Stefano Stabellini wrote:
> > > On Fri, 21 Mar 2014, Ian Campbell wrote:
> > > > On Wed, 2014-03-19 at 12:32 +0000, Stefano Stabellini wrote:
> > > > > Set GICH_LR_PENDING in the corresponding GICH_LR to inject a second irq
> > > > > while the first one is still active.
> > > > > If the first irq is already pending (not active), just clear
> > > > > GIC_IRQ_GUEST_PENDING because the irq has already been injected and is
> > > > > already visible to the guest.
> > > > > If the irq has already been EOI'ed then just clear the GICH_LR right
> > > > > away and move the interrupt to lr_pending so that it is going to be
> > > > > reinjected by gic_restore_pending_irqs on return to guest.
> > > > > 
> > > > > If the target cpu is not the current cpu, then set GIC_IRQ_GUEST_PENDING
> > > > > and send an SGI. The target cpu is going to be interrupted and call
> > > > > gic_clear_lrs, which is going to take the same actions.
> > > > > 
> > > > > Unify the inflight and non-inflight code paths in vgic_vcpu_inject_irq.
> > > > > 
> > > > > Do not call vgic_vcpu_inject_irq from gic_inject if
> > > > > evtchn_upcall_pending is set. If we remove that call, we don't need to
> > > > > special case evtchn_irq in vgic_vcpu_inject_irq anymore.
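
(If I read the description above right, the LR handling amounts to
roughly the sketch below. Helper names and field layout here are
illustrative, loosely based on the vgic code of the time, not the
literal patch:)

    /* Per-LR update, called with the vgic lock held (sketch only). */
    static void update_one_lr(struct vcpu *v, int i)
    {
        uint32_t lr = GICH[GICH_LR + i];
        unsigned int irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
        struct pending_irq *p = irq_to_pending(v, irq);

        if ( lr & GICH_LR_ACTIVE )
        {
            /* First irq still active: if a second injection was
             * requested, make the same LR pending again. */
            if ( test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
                GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
        }
        else if ( lr & GICH_LR_PENDING )
        {
            /* Already pending and visible to the guest: the request
             * is satisfied, just clear the software bit. */
            clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
        }
        else
        {
            /* EOIed: free the LR; if another injection was requested,
             * queue it on lr_pending so that gic_restore_pending_irqs
             * reinjects it on return to guest. */
            GICH[GICH_LR + i] = 0;
            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
                list_add_tail(&p->lr_queue, &v->arch.vgic.lr_pending);
        }
    }
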
> > > > 
> > > > That's the consequence of removing it, but what is the rationale for why
> > > > it is OK?
> > > 
> > > Because with this patch we are able to inject a second interrupt while
> > > the first one is still in progress.
> > 
> > IOW gic_inject of the evtchn just works even if the evtchn is already
> > pending?
> >
> > Don't we then have a problem with a potentially spurious evtchn upcall?
> > 
> > Event chn A raised
> >  -> IRQ inject
> >     -> Guest takes IRQ
> >         -> Guest enters evtchn handling loop
> >             -> handles A
> > Event chn B raised
> >  -> IRQ becomes pending again
> >             -> handles B
> >          -> Finishes evtchn handling loop
> >          -> unmasks interrupt
> >     -> Guest takes another IRQ
> >           -> Nothing to do (B already dealt with)
> > 
> > ?
> 
> No, because vcpu_mark_events_pending only calls vgic_vcpu_inject_irq if
> evtchn_upcall_pending is not set.
> 
> We only inject a second evtchn_irq if the guest actually needs a second
> notification.

OK, thanks for the explanation.
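
For reference, the guard you mean is the test_and_set_bit in
vcpu_mark_events_pending, i.e. roughly the following (a sketch from
memory, the exact vgic_vcpu_inject_irq signature may differ):

    void vcpu_mark_events_pending(struct vcpu *v)
    {
        /* Atomically set evtchn_upcall_pending; if it was already
         * set, the guest still has a notification outstanding and
         * will pick up the new event in its handling loop, so no
         * second injection is needed. */
        int already_pending = test_and_set_bit(
            0, (unsigned long *)&vcpu_info(v, evtchn_upcall_pending));

        if ( already_pending )
            return;

        vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
    }

That closes the double-upcall scenario in the trace above: B's
injection is skipped while A's upcall is still pending.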

> > > > > We also need to force the first injection of evtchn_irq (call
> > > > > gic_vcpu_inject_irq) from vgic_enable_irqs because evtchn_upcall_pending
> > > > > is already set by common code on vcpu creation.
> > > > 
> > > > This is because the common code expects that the guest is moving away
> > > > from shared_info based vcpu info by using VCPUOP_register_vcpu_info, as
> > > > on x86, but on ARM we start off that way anyway.
> > > > 
> > > > I suppose it's a minor wrinkle, but I wonder if we can get rid of it...
> > > 
> > > Do you mean getting rid of evtchn_upcall_pending being set at vcpu
> > > creation? Wouldn't that cause possible undesirable side effects to
> > > guests that came to rely on it somehow? It would be an ABI change.
> > 
> > I mean precisely for the boot cpu when it is brought online: there isn't
> > much sense in immediately taking an interrupt as soon as that cpu enables
> > interrupts.
> > 
> > The reason for setting it right now is only for the case of a cpu moving
> > its vcpu info, to ensure it can't miss anything.
> 
> What about secondary vcpus? Should we keep the spurious injection for
> them?

Not sure. No?

> In any case I agree with you that the current behaviour is not nice,
> however I am worried about changing a guest visible interface like this
> one, that would affect x86 guests too.

Oh, I certainly wouldn't change this for x86! Or maybe I would change it,
but only for cpus which are not online at the time when the init happens
(which is effectively the difference between the x86 and ARM cases).
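
(For completeness: the forced first injection discussed above would be a
check along these lines in vgic_enable_irqs, again a sketch rather than
the exact hunk, with the inflight test making sure we don't
double-inject:)

    /* In vgic_enable_irqs, for each irq being enabled (sketch): */
    if ( irq == v->domain->arch.evtchn_irq &&
         vcpu_info(v, evtchn_upcall_pending) &&
         list_empty(&p->inflight) )
        vgic_vcpu_inject_irq(v, irq);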


