Re: [Xen-devel] [PATCH 2/2] xen/arm: observe itarget setting in vgic_enable_irqs and vgic_disable_irqs
On Wed, 28 May 2014, Ian Campbell wrote:
> On Sun, 2014-05-25 at 19:06 +0100, Stefano Stabellini wrote:
> > vgic_enable_irqs should enable irq delivery to the vcpu specified by
> > GICD_ITARGETSR, rather than the vcpu that wrote to GICD_ISENABLER.
> > Similarly vgic_disable_irqs should use the target vcpu specified by
> > itarget to disable irqs.
> >
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> > ---
> > xen/arch/arm/vgic.c | 42 ++++++++++++++++++++++++++++++++++--------
> > 1 file changed, 34 insertions(+), 8 deletions(-)
> >
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index e4f38a0..0f0ba1d 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -376,12 +376,25 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
> > unsigned int irq;
> > unsigned long flags;
> > int i = 0;
> > + int target;
> > + struct vcpu *v_target;
> > + struct vgic_irq_rank *rank;
> >
> > while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> > irq = i + (32 * n);
> > - p = irq_to_pending(v, irq);
> > + rank = vgic_irq_rank(v, 1, irq/32);
> > + vgic_lock_rank(v, rank);
> > + if ( irq >= 32 )
> > + {
> > + target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
> > + target &= 0xff;
>
> This is byte_read(), isn't it?
Yes, I'll use it.
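For reference, roughly what that would look like, assuming byte_read()
keeps its current (value, sign, byte offset) signature in vgic.c:

    /* Pull the GICD_ITARGETSR byte for this irq out of the rank;
     * byte_read() does the shift-by-8*offset and the 0xff mask, so it
     * replaces the open-coded extraction above. */
    target = byte_read(rank->itargets[(irq%32)/4], 0, irq % 4);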
> > + v_target = v->domain->vcpu[target];
>
> There needs to be some sort of range check here I think. Else you are
> setting a trap for whoever implements itargets properly.
The check belongs at the point where we write itargets, not here.
The previous patch already introduces a check that keeps itargets at
zero, so for now the target byte can only name vcpu0, which always
exists.
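If and when guests are allowed to set itargets, the validation would
naturally live in the GICD_ITARGETSR write handler, along these lines
(sketch only, assuming the handler's existing write_ignore path):

    /* Ignore a target byte that does not name an existing vcpu. */
    if ( target >= v->domain->max_vcpus )
        goto write_ignore;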
> > + } else
> > + v_target = v;
> > + vgic_unlock_rank(v, rank);
> > + p = irq_to_pending(v_target, irq);
> > clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> > - gic_remove_from_queues(v, irq);
> > + gic_remove_from_queues(v_target, irq);
> > if ( p->desc != NULL )
> > {
> > spin_lock_irqsave(&p->desc->lock, flags);
> > @@ -399,21 +412,34 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
> > unsigned int irq;
> > unsigned long flags;
> > int i = 0;
> > + int target;
> > + struct vcpu *v_target;
> > + struct vgic_irq_rank *rank;
> >
> > while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> > irq = i + (32 * n);
> > - p = irq_to_pending(v, irq);
> > + rank = vgic_irq_rank(v, 1, irq/32);
> > + vgic_lock_rank(v, rank);
> > + if ( irq >= 32 )
> > + {
> > + target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
> > + target &= 0xff;
> > + v_target = v->domain->vcpu[target];
> > + } else
> > + v_target = v;
>
> This is the same code as above -- there should be a helper
> (get_target_vcpu or some such).
Good idea.
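Something along these lines, perhaps (a rough sketch only, using the
name Ian suggested and the same itargets/rank handling as this patch):

    /* Return the vcpu that GICD_ITARGETSR currently points at for irq.
     * SGIs and PPIs (irq < 32) are always handled by the vcpu itself. */
    static struct vcpu *get_target_vcpu(struct vcpu *v, unsigned int irq)
    {
        int target;
        struct vcpu *v_target;
        struct vgic_irq_rank *rank = vgic_irq_rank(v, 1, irq/32);

        vgic_lock_rank(v, rank);
        if ( irq >= 32 )
        {
            target = byte_read(rank->itargets[(irq%32)/4], 0, irq % 4);
            v_target = v->domain->vcpu[target];
        }
        else
            v_target = v;
        vgic_unlock_rank(v, rank);

        return v_target;
    }

Then both vgic_enable_irqs and vgic_disable_irqs can simply do
v_target = get_target_vcpu(v, irq) at the top of the loop body.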
> > + vgic_unlock_rank(v, rank);
> > + p = irq_to_pending(v_target, irq);
> > set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> > - if ( irq == v->domain->arch.evtchn_irq &&
> > + if ( irq == v_target->domain->arch.evtchn_irq &&
> > vcpu_info(current, evtchn_upcall_pending) &&
> > list_empty(&p->inflight) )
> > - vgic_vcpu_inject_irq(v, irq);
> > + vgic_vcpu_inject_irq(v_target, irq);
> > else {
> > unsigned long flags;
> > - spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > + spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
> > if ( !list_empty(&p->inflight) &&
> > !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
> > - gic_raise_guest_irq(v, irq, p->priority);
> > - spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > + gic_raise_guest_irq(v_target, irq, p->priority);
> > + spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
> > }
> > if ( p->desc != NULL )
> > {
>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel