
Re: [Xen-devel] [XEN PATCH v3 07/11] xen: arm: vgic: allow delivery of PPIs to guests



On Tue, 26 Nov 2019, Julien Grall wrote:
> On 26/11/2019 22:36, Stefano Stabellini wrote:
> > On Mon, 25 Nov 2019, Julien Grall wrote:
> > > On 23/11/2019 20:35, Julien Grall wrote:
> > > > Hi,
> > > > 
> > > > On 15/11/2019 20:10, Stewart Hildebrand wrote:
> > > > > Allow vgic_get_hw_irq_desc to be called with a vcpu argument.
> > > > > 
> > > > > Use vcpu argument in vgic_connect_hw_irq.
> > > > > 
> > > > > vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce
> > > > > with
> > > > > ASSERTs.
> > > > > 
> > > > > Signed-off-by: Stewart Hildebrand <stewart.hildebrand@xxxxxxxxxxxxxxx>
> > > > > 
> > > > > ---
> > > > > v3: new patch
> > > > > 
> > > > > ---
> > > > > Note: I have only modified the old vgic to allow delivery of PPIs.
> > > > 
> > > > The new vGIC should also be modified to support delivery of PPIs.
> > > > 
> > > > > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > > > > index 82f524a35c..c3933c2687 100644
> > > > > --- a/xen/arch/arm/vgic.c
> > > > > +++ b/xen/arch/arm/vgic.c
> > > > > @@ -410,10 +410,10 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
> > > > >                irq_set_affinity(p->desc, cpumask_of(v_target->processor));
> > > > >                spin_lock_irqsave(&p->desc->lock, flags);
> > > > >                /*
> > > > > -             * The irq cannot be a PPI, we only support delivery of SPIs
> > > > > -             * to guests.
> > > > > +             * The irq cannot be a SGI, we only support delivery of SPIs
> > > > > +             * and PPIs to guests.
> > > > >                 */
> > > > > -            ASSERT(irq >= 32);
> > > > > +            ASSERT(irq >= NR_SGIS);
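For reference, the interrupt ID ranges behind the two ASSERTs (NR_SGIS comes
from this series; I am assuming it evaluates to 16):

  /*
   * GIC interrupt ID space (Arm GIC architecture):
   *   SGIs:  0..15    - software-generated, never backed by an irq_desc here
   *   PPIs: 16..31    - per-CPU (banked) peripheral interrupts
   *   SPIs: 32..1019  - shared peripheral interrupts
   *
   * The old ASSERT(irq >= 32) excluded both SGIs and PPIs; the new
   * ASSERT(irq >= NR_SGIS) lets PPIs through as well.
   */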
> > > > 
> > > > We usually put ASSERT() in places where we know the code wouldn't be
> > > > able to work correctly if the ASSERT were hit. In this particular case:
> > > > 
> > > > >                if ( irq_type_set_by_domain(d) )
> > > > >                    gic_set_irq_type(p->desc, vgic_get_virq_type(v, n, i));
> > > > 
> > > > 1) We don't want to allow any domain (including Dom0) to modify the
> > > > interrupt type (i.e. level/edge) for PPIs as this is shared. You will
> > > > also
> > > > most likely need to modify the counterpart in setup_guest_irq().
> > > > 
> > > > >                p->desc->handler->enable(p->desc);
> > > > 
> > > > 2) On GICv3, the re-distributor of vCPU A is accessible by vCPU B. So
> > > > vCPU B could enable the PPI for vCPU A. But this would be called on the
> > > > wrong pCPU, leading to inconsistency between the hardware state and the
> > > > internal vGIC state.
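To make 1) concrete, a rough sketch of the kind of guard that could go into
vgic_enable_irqs() and setup_guest_irq() (illustration only, not part of the
series; NR_GIC_LOCAL_IRQS is the existing 32-interrupt bound):

  /* Sketch: only SPIs may have their trigger type changed by the domain;
   * for PPIs keep the type Xen already programmed, as the line is shared. */
  if ( irq_type_set_by_domain(d) && irq >= NR_GIC_LOCAL_IRQS )
      gic_set_irq_type(p->desc, vgic_get_virq_type(v, n, i));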
> > > 
> > > I thought a bit more about the issue over the weekend. The current vGIC
> > > is fairly messy. I can see two ways to solve this:
> > >      1) Send an IPI to the pCPU where vCPU A is running and disable/enable
> > > the interrupt. The other side would need to check that the vCPU is actually
> > > running, to avoid disabling the PPI for the wrong pCPU.
> > >      2) Keep the HW interrupt always enabled
> > > 
> > > We propagated the enable/disable because of some messy parts in the vGIC:
> > >      - vgic_inject_irq() will not queue any pending interrupt if the vCPU
> > > is offline. While such interrupts cannot be delivered, we still need to keep
> > > them pending, as they will never occur again otherwise. This is because they
> > > are active on the host side and the guest has no way to deactivate them.
> > >      - Our implementation of PSCI CPU_ON will remove all pending interrupts
> > > (see vgic_clear_pending_irqs()). I am not entirely sure of the implications
> > > here, because of the previous point.
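For context, the first point comes from the check near the top of
vgic_inject_irq() in the current tree, roughly:

  /* vgic_inject_irq(): a vCPU that is marked down gets nothing queued, so an
   * interrupt that is still active on the host side would be lost for good. */
  if ( test_bit(_VPF_down, &v->pause_flags) )
  {
      spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
      return;
  }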
> > > 
> > > There are probably more. Aside from the issues with it, I don't really see
> > > a good advantage to propagating the interrupt state, as these interrupts
> > > (PPIs, SPIs) have an active state. So they can only be received once until
> > > the guest actually handles them.
> > > 
> > > So my preference would still be 2), because this makes the code simpler
> > > and avoids IPIs and other potential locking trouble.
> > 
> > Yes, I think that is a good suggestion. I take it that you mean that in
> > vgic_disable_irqs for PPIs we would basically only clear
> > GIC_IRQ_GUEST_ENABLED and then return, right?
> Not really, I am only suggesting to remove this part:
> 
> if ( desc != NULL )
>   ...

I think we are saying the same thing.
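
Concretely, something like this (sketch only, paraphrasing the existing loop
body of vgic_disable_irqs(); SPIs would keep the current behaviour):

  /* (under the vgic lock, as today) */
  clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
  gic_remove_from_lr_pending(v_target, p);
  desc = p->desc;
  /* (vgic lock dropped) leave hardware PPIs enabled; only SPIs still get
   * the HW disable */
  if ( desc != NULL && irq >= NR_GIC_LOCAL_IRQS )
  {
      spin_lock_irqsave(&desc->lock, flags);
      desc->handler->disable(desc);
      spin_unlock_irqrestore(&desc->lock, flags);
  }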


> But this change alone is not enough. It would require some modification in the
> rest of the vGIC (see my previous e-mail) and likely some investigation to
> understand the implications of keeping the interrupt enabled in the HW (I am
> a bit worried we may have baked this assumption into other parts of the vGIC
> :().

I can see that at least save_and_mask_hwppi and restore_hwppi would need
to be modified to account for the fact that GICD_ISENABLER would say "it
is enabled" while GIC_IRQ_GUEST_ENABLED is actually unset.