
Re: [Xen-devel] [PATCH] xen/arm: introduce platform_need_explicit_eoi



On Fri, 20 Jun 2014, Julien Grall wrote:
> On 20/06/14 15:27, Stefano Stabellini wrote:
> > On Fri, 20 Jun 2014, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 20/06/14 14:35, Stefano Stabellini wrote:
> > > > GICH_LR_HW doesn't work as expected on X-Gene: request maintenance
> > > > interrupts and perform EOIs in the hypervisor as a workaround.
> > > > Trigger this behaviour with a per-platform option.
> > > > 
> > > > No need to find the pcpu that needs to write to GICC_DIR, because after
> > > > "physical irq follow virtual irq" we always inject the virtual irq on
> > > > the vcpu that is running on the pcpu that received the interrupt.
> > > 
> > > Even without the patch "physical irq follow virtual irq" we can EOI an
> > > SPI on any physical CPU.
> > 
> > Can you point me to the paragraph of the GIC spec that says that?
> 
> It's clearly specified in the spec that the GICC_EOIR write must be done on
> the same physical CPU, but nothing is specified for GICC_DIR.

That's a pity.
I suspect that you are right but I don't like to make an assumption like
that without the spec clearly supporting it.
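
For context, this is the split completion flow being discussed: with the
split EOI mode enabled (GICC_CTLR.EOImode), GICC_EOIR only performs the
priority drop and, per the spec, must be written on the pcpu that read
GICC_IAR, while GICC_DIR performs the deactivation. Below is a minimal
sketch, using the standard GICv2 register offsets but hypothetical MMIO
accessors:

    #include <stdint.h>

    #define GICC_IAR    0x00C   /* Interrupt Acknowledge (GICv2) */
    #define GICC_EOIR   0x010   /* End Of Interrupt: priority drop only when EOImode=1 */
    #define GICC_DIR    0x1000  /* Deactivate Interrupt */

    /* Hypothetical MMIO helpers, stand-ins for the real CPU interface accessors. */
    extern uint32_t gicc_read(uint32_t offset);
    extern void gicc_write(uint32_t offset, uint32_t val);

    static uint32_t ack_irq(void)
    {
        /* Happens on the pcpu that took the interrupt. */
        return gicc_read(GICC_IAR) & 0x3ff;
    }

    static void priority_drop(uint32_t irq)
    {
        /* The spec ties this write to the CPU interface that read GICC_IAR. */
        gicc_write(GICC_EOIR, irq);
    }

    static void deactivate(uint32_t irq)
    {
        /* The point under discussion: the spec does not say this write
         * has to happen on the same physical CPU. */
        gicc_write(GICC_DIR, irq);
    }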


> I'm assuming this is the case, because even with the HW bit the physical
> interrupt may be EOIed on a different CPU. Think about the case where the
> VCPU has been migrated while the guest is still handling the interrupt...

I also think it should be OK, but can we be sure that all the SoC
vendors are going to use a GICv2 that supports this behaviour?


> Your patch "physical irq follow virtual irq" won't even solve this problem if
> the maintenance interrupt is requested to EOI the IRQ.

I don't understand this statement.
The maintenance interrupt is requested to make sure that the vcpu is
interrupted as soon as it EOIs the virtual irq, so that
gic_update_one_lr can run.
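
As an illustration of what that means at the list register level, here is a
rough sketch (not the actual patch) of how an LR could be populated for this
quirk: leave the HW bit clear and set the EOI/maintenance bit instead, so a
maintenance interrupt fires when the guest EOIs the virtual irq and Xen gets
to deactivate the physical one itself. The field layout follows the GICv2
spec; the helper name is made up:

    #include <stdint.h>

    /* GICv2 list register (GICH_LR) fields. */
    #define GICH_LR_HW               (1U << 31)  /* GIC deactivates the physical irq on guest EOI */
    #define GICH_LR_STATE_PENDING    (1U << 28)
    #define GICH_LR_MAINTENANCE_IRQ  (1U << 19)  /* EOI bit: maintenance irq on guest EOI */
    #define GICH_LR_PRIORITY_SHIFT   23
    #define GICH_LR_PHYSICAL_SHIFT   10

    /* Hypothetical helper, not the real Xen code. */
    static uint32_t make_lr(unsigned int virq, unsigned int priority,
                            int need_explicit_eoi)
    {
        uint32_t lr = GICH_LR_STATE_PENDING |
                      ((priority & 0x1f) << GICH_LR_PRIORITY_SHIFT) |
                      (virq & 0x3ff);

        if ( need_explicit_eoi )
            /* Quirk: no HW bit; request a maintenance interrupt instead,
             * so gic_update_one_lr() runs and Xen EOIs the physical irq. */
            lr |= GICH_LR_MAINTENANCE_IRQ;
        else
            /* Normal case: HW bit set, physical ID == virtual ID for SPIs,
             * and the GIC deactivates the physical irq on guest EOI. */
            lr |= GICH_LR_HW | ((virq & 0x3ff) << GICH_LR_PHYSICAL_SHIFT);

        return lr;
    }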



> > > Otherwise, even with this patch, you will have to find the pcpu when the
> > > VCPU has been migrated, to EOI the IRQ :).
> > > I would rework this last paragraph to explain that this is how the GIC
> > > behaves.
> > > 
> > > >    static void gic_update_one_lr(struct vcpu *v, int i)
> > > >    {
> > > >        struct pending_irq *p;
> > > > @@ -692,7 +699,11 @@ static void gic_update_one_lr(struct vcpu *v, int i)
> > > >            clear_bit(i, &this_cpu(lr_mask));
> > > > 
> > > >            if ( p->desc != NULL )
> > > > +        {
> > > >                p->desc->status &= ~IRQ_INPROGRESS;
> > > > +            if ( platform_has_quirk(PLATFORM_QUIRK_NEED_EOI) )
> > > > +                gic_irq_eoi(p->irq);
> > > > +        }
> > > >            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> > > >            clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
> > > >            p->lr = GIC_INVALID_LR;
> > > 
> > > With this change, shouldn't we clear the bit and the LR before EOIing the
> > > IRQ?
> > 
> > Conceptually you might be right, but it is all happening with the vgic
> > lock taken, so it shouldn't make any difference.
> 
> The VGIC lock is per-vcpu.
> 
> If this is happening while we migrate, nothing protects p->lr and the
> different bits anymore.

The patch series
alpine.DEB.2.02.1406111607450.13771@xxxxxxxxxxxxxxxxxxxxxxx takes
care of vcpu migration and locking.
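
For illustration, the reordering Julien asks about (tearing down the LR
bookkeeping before deactivating the interrupt) would look roughly like the
fragment below. It is a sketch derived from the quoted hunk, not a tested
change:

            clear_bit(i, &this_cpu(lr_mask));

            if ( p->desc != NULL )
                p->desc->status &= ~IRQ_INPROGRESS;
            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
            clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
            p->lr = GIC_INVALID_LR;

            /* Deactivate only after the pending_irq state has been torn down. */
            if ( p->desc != NULL && platform_has_quirk(PLATFORM_QUIRK_NEED_EOI) )
                gic_irq_eoi(p->irq);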


> Indeed, vgic_inject_irq will use the new VCPU. This can happen as soon as
> we EOI the IRQ.

Yes, but at that point we have also already migrated the corresponding
physical irq to the new pcpu.

However, looking back again at my series, if the guest kernel keeps
writing to GICD_ITARGETSR while the interrupt is firing, there is a
small window of time when it could be possible to read GICC_IAR in Xen
on a different pcpu than the one on which we are injecting the virtual irq.
It involves migrating an interrupt more than once before EOIing the one
that is currently INPROGRESS.
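
For reference, "migrating" the physical interrupt in the paragraph above
boils down to rewriting its target byte in GICD_ITARGETSR, along these
lines (standard GICv2 offset, hypothetical distributor accessor):

    #include <stdint.h>

    #define GICD_ITARGETSR  0x800  /* one byte per interrupt, byte-accessible (GICv2) */

    /* Hypothetical MMIO helper standing in for the real distributor accessor. */
    extern void gicd_write8(uint32_t offset, uint8_t val);

    /* Retarget a physical SPI so that it follows the vcpu to a new pcpu. */
    static void irq_set_target(unsigned int irq, unsigned int new_pcpu)
    {
        gicd_write8(GICD_ITARGETSR + irq, (uint8_t)(1U << new_pcpu));
    }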

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

