
Re: [Xen-devel] [PATCH v6 10/36] ARM: GIC: Add checks for NULL pointer pending_irq's



On 07/04/17 20:07, Stefano Stabellini wrote:
> On Fri, 7 Apr 2017, Andre Przywara wrote:
>> For LPIs the struct pending_irq instances are somewhat dynamically allocated and
>> the pointers are stored in a radix tree. While I convinced myself that
>> an invalid LPI number can't make it into the core code, people might be
>> concerned about NULL pointer dereferences.
>> So add checks in some places just to be on the safe side.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
> 
> This approach looks fragile: what if we miss one irq_to_pending call? We
> need a way to avoid pending_irq structs being freed while an irq is still
> inflight. Only once an irq is no longer inflight can its struct
> pending_irq be freed.

Indeed. I was wondering if a dummy pending_irq could help as a first
step: upon unmapping, we replace the radix-tree member(s) with one
reserved instance (per domain), which would avoid the NULL pointer
dereference.
Then we keep the existing pending_irq (array) around until we are sure
that no-one holds a reference anymore - either by using RCU (although I
think this is problematic because of the rcu_read_lock) or by finding a
definite point in time after which no-one can possibly use that pointer.
The pointer usage seems to be very confined, so I am hopeful we can find
such a point (say: once every VCPU has exited and entered once, or the
like).

Does that sound like a possible route?

Cheers,
Andre.


> 
> 
>> ---
>>  xen/arch/arm/gic.c  | 23 +++++++++++++++++++++++
>>  xen/arch/arm/vgic.c |  4 ++++
>>  2 files changed, 27 insertions(+)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index da19130..44c34b1 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -405,6 +405,13 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>>      struct pending_irq *p = irq_to_pending(v, virtual_irq);
>>      unsigned long flags;
>>  
>> +    /*
>> +     * If an LPI has been removed meanwhile, it has been cleaned up
>> +     * already, so nothing to remove here.
>> +     */
>> +    if ( !p )
>> +        return;
>> +
>>      spin_lock_irqsave(&v->arch.vgic.lock, flags);
>>      if ( !list_empty(&p->lr_queue) )
>>          list_del_init(&p->lr_queue);
>> @@ -415,6 +422,10 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
>>  {
>>      struct pending_irq *n = irq_to_pending(v, virtual_irq);
>>  
>> +    /* If an LPI has been removed meanwhile, there is nothing left to raise. */
>> +    if ( !n )
>> +        return;
>> +
>>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
>>  
>>      if ( list_empty(&n->lr_queue) )
>> @@ -461,7 +472,19 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>>  
>>      gic_hw_ops->read_lr(i, &lr_val);
>>      irq = lr_val.virq;
>> +
>>      p = irq_to_pending(v, irq);
>> +    /* An LPI might have been unmapped, in which case we just clean up here. */
>> +    if ( !p )
>> +    {
>> +        ASSERT(is_lpi(irq));
>> +
>> +        gic_hw_ops->clear_lr(i);
>> +        clear_bit(i, &this_cpu(lr_mask));
>> +
>> +        return;
>> +    }
>> +
>>      if ( lr_val.state & GICH_LR_ACTIVE )
>>      {
>>          set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>> index 83569b0..b7ee105 100644
>> --- a/xen/arch/arm/vgic.c
>> +++ b/xen/arch/arm/vgic.c
>> @@ -470,6 +470,10 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
>>      unsigned long flags;
>>      bool running;
>>  
>> +    /* If an LPI has been removed, there is nothing to inject here. */
>> +    if ( !n )
>> +        return;
>> +
>>      priority = vgic_get_virq_priority(v, virq);
>>  
>>      spin_lock_irqsave(&v->arch.vgic.lock, flags);
>> -- 
>> 2.9.0
>>

