
Re: [Xen-devel] [PATCH v3 12/24] xen/arm: Release IRQ routed to a domain when it's destroying



Hi Ian,

On 20/02/15 16:31, Ian Campbell wrote:
> On Tue, 2015-01-13 at 14:25 +0000, Julien Grall wrote:
> 
>> Futhermore, a guest can crash and let the IRQ in an incorrect state (i.e has
> 
> "Furthermore" (I think your finger macros have this one wrong, you might
> want to grep the series ;-))

I make this mistake every time. I will check this series (and upcoming series).

> 
>> +    /* TODO: Handle eviction from LRs. For now, deny remove if the IRQ
>> +     * is inflight and not disabled.
> 
> If we are ungracefully killing a guest doesn't this risk ending up with
> an undestroyable domain? i.e. if the guest is paused with an inflight
> IRQ and then destroyed, how does the inflight IRQ ever become
> not-inflight again? A similar argument could apply if the guest has e.g.
> crashed, paniced or is otherwise not processing any more interrupts or
> generating EOIs for existing ones.

No, this case is handled by the "is_dying" path. During domain destruction,
the hypervisor will release all the IRQs still assigned to the guest one
by one.
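
For reference, the dying-domain path could look something like this in
gic_remove_irq_from_guest() (a rough sketch; the exact status bits and
deactivating via desc->handler->end() are my assumptions, not final code):

    if ( test_bit(_IRQ_INPROGRESS, &desc->status) ||
         !test_bit(_IRQ_DISABLED, &desc->status) )
    {
        /* A live domain may still EOI the IRQ, so refuse the removal. */
        if ( !d->is_dying )
            return -EBUSY;

        /*
         * The domain is dying: nobody will EOI the IRQ anymore, so
         * deactivate it ourselves before releasing the descriptor.
         */
        desc->handler->end(desc);
        clear_bit(_IRQ_INPROGRESS, &desc->status);
    }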

> We need to be able to kill a guest in such a state somehow.

The TODO covers the case of a running domain where we try to deassign an
inflight IRQ.

> (BTW, I notice the comment style is consistently wrong through the
> series)

I will check all the patches.
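
For the archives, the style in question: multi-line comments should open
with a lone /*, so the TODO quoted above would become:

    /*
     * TODO: Handle eviction from LRs. For now, deny remove if the IRQ
     * is inflight and not disabled.
     */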

>> +    /* Only SPIs are supported */
>> +    if ( virq < 32 || virq >= vgic_num_irqs(d) )
>> +        return -EINVAL;
>> +
>> +    p = irq_to_pending(d->vcpu[0], virq);
>> +    if ( !p->desc )
>> +        return -EINVAL;
> 
> Are we seeing this pattern a lot? It seems so, I wonder if a helper
> could be useful:
>        p = spi_to_pending(d, virq);
>        if ( !p->desc )
>            return -EINVAL;
> 
> with the < 32 check etc in the helper where it only needs commenting on
> once.

IIRC, there are only 2 places. I will replace it with a helper.
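
Something along these lines (a sketch; returning NULL for a non-SPI is my
assumption about how to fold the range check into the helper):

    /*
     * Only SPIs are supported: virq 32 up to vgic_num_irqs(d).
     * Return NULL for anything else so callers can bail out.
     */
    struct pending_irq *spi_to_pending(struct domain *d, unsigned int virq)
    {
        if ( virq < 32 || virq >= vgic_num_irqs(d) )
            return NULL;

        return irq_to_pending(d->vcpu[0], virq);
    }

with the caller becoming:

    p = spi_to_pending(d, virq);
    if ( !p || !p->desc )
        return -EINVAL;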

>> +
>> +    desc = p->desc;
>> +
>> +    spin_lock_irqsave(&desc->lock, flags);
>> +
>> +    ret = -EINVAL;
>> +    if ( !test_bit(_IRQ_GUEST, &desc->status) )
>> +        goto unlock;
>> +
>> +    ret = -EINVAL;
> 
> A bit redundant, or should nestle the info = like you did above with the
> test_bit.

I will invert it.
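
Something like:

    info = irq_get_guest_info(desc);

    /* Keep the assignment next to the check it guards. */
    ret = -EINVAL;
    if ( d != info->d )
        goto unlock;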

>> +    info = irq_get_guest_info(desc);
>> +    if ( d != info->d )
>> +        goto unlock;
>> +
>> +    ret = gic_remove_irq_from_guest(d, virq, desc);
>> +
>> +    spin_unlock_irqrestore(&desc->lock, flags);
>> +
>> +    if ( !ret )
> 
> This is a bit unconventional (it looks essentially like you have three
> exit paths, with this extra special success case and two error cases).
> 
> It would better if the gic_remove_irq_from_guest failure case went to
> unlock like all the other failure cases, then the success path would be
> a straight line.
> 
> (yes, that would mean a goto unlock right before the first spin_unlock,
> but the code flow would be clearer).

Will do.
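
Something like this, then (reusing the names from the patch):

    ret = gic_remove_irq_from_guest(d, virq, desc);
    /* The failure case now joins the other error paths... */
    if ( ret )
        goto unlock;

    /* ... and the success path is a straight line from here on. */
    spin_unlock_irqrestore(&desc->lock, flags);

    release_irq(desc->irq, info);
    xfree(info);

    return 0;

unlock:
    spin_unlock_irqrestore(&desc->lock, flags);

    return ret;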

> 
>> +    {
>> +        release_irq(desc->irq, info);
>> +        xfree(info);
>> +    }
>> +
>> +    return ret;
>> +
>> +unlock:
>> +    spin_unlock_irqrestore(&desc->lock, flags);
>> +
>> +    return ret;
>> +}
>> +
>>  /*
>>   * pirq event channels. We don't use these on ARM, instead we use the
>>   * features of the GIC to inject virtualised normal interrupts.
>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>> index fc8a270..4ddfd73 100644
>> --- a/xen/arch/arm/vgic.c
>> +++ b/xen/arch/arm/vgic.c
>> @@ -133,6 +133,22 @@ void register_vgic_ops(struct domain *d, const struct vgic_ops *ops)
>>  
>>  void domain_vgic_free(struct domain *d)
>>  {
>> +    int i;
>> +    int ret;
>> +
>> +    for ( i = 0; i < (d->arch.vgic.nr_spis); i++ )
>> +    {
>> +        struct pending_irq *p = &d->arch.vgic.pending_irqs[i];
> 
> Is there not a helper for this lookup? If so it should be used.

The irq_to_pending helper adds an extra check. But I guess we don't care
about that for domain destruction?
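
The loop could then become something like this (assuming the helper
discussed above; release_guest_irq() is a placeholder name for whatever
the deassign entry point ends up being called):

    for ( i = 0; i < d->arch.vgic.nr_spis; i++ )
    {
        struct pending_irq *p = spi_to_pending(d, i + 32);

        /* Only bother with IRQs actually routed to the domain. */
        if ( p->desc )
            release_guest_irq(d, i + 32);
    }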

Regards,

-- 
Julien Grall
