
Re: [Xen-devel] [PATCH v10 01/32] ARM: vGIC: avoid rank lock when reading priority



Hi,

On 30/05/17 11:47, Julien Grall wrote:
> Hi Andre,
> 
> On 26/05/17 18:35, Andre Przywara wrote:
>> When reading the priority value of a virtual interrupt, we were taking
>> the respective rank lock so far.
>> However for forwarded interrupts (Dom0 only so far) this may lead to a
>> deadlock with the following call chain:
>> - MMIO access to change the IRQ affinity, calling the ITARGETSR handler
>> - this handler takes the appropriate rank lock and calls
>> vgic_store_itargetsr()
>> - vgic_store_itargetsr() will eventually call vgic_migrate_irq()
>> - if this IRQ is already in-flight, it will remove it from the old
>>   VCPU and inject it into the new one, by calling vgic_vcpu_inject_irq()
>> - vgic_vcpu_inject_irq will call vgic_get_virq_priority()
>> - vgic_get_virq_priority() tries to take the rank lock - again!
>> It seems like this code path has never been exercised before.
>>
>> Fix this by avoiding taking the lock in vgic_get_virq_priority() (like we
>> do in vgic_get_target_vcpu()).
>> Actually we are just reading one byte, and priority changes while
>> interrupts are handled are a benign race that can happen on real hardware
>> too. So it looks safe to just use read_atomic() instead.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
>> ---
>>  xen/arch/arm/vgic.c | 8 +-------
>>  1 file changed, 1 insertion(+), 7 deletions(-)
>>
>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>> index 83569b0..54b2aad 100644
>> --- a/xen/arch/arm/vgic.c
>> +++ b/xen/arch/arm/vgic.c
>> @@ -227,14 +227,8 @@ struct vcpu *vgic_get_target_vcpu(struct vcpu *v,
>> unsigned int virq)
>>  static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
>>  {
>>      struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
>> -    unsigned long flags;
>> -    int priority;
>> -
>> -    vgic_lock_rank(v, rank, flags);
>> -    priority = rank->priority[virq & INTERRUPT_RANK_MASK];
>> -    vgic_unlock_rank(v, rank, flags);
>>
>> -    return priority;
>> +    return read_atomic(&rank->priority[virq & INTERRUPT_RANK_MASK]);
> 
> The write to rank->priority will not be atomic (see the vgic_reg_update
> implementation): the register is first masked, then the priority is set.
> 
> So you may end up reading 0 (which is the highest priority) by mistake.
> 
> We should probably think about making the vgic_reg_* helpers atomic.

The patch you sent yesterday may be a bit over the top for this.

What about this one (in xen/arch/arm/vgic-v[23].c):

static int vgic_v2_distr_mmio_write(struct vcpu *v, ...
{
-    uint32_t *ipriorityr;
+    uint32_t *ipriorityr, priority;
....
     vgic_lock_rank(v, rank, flags);
     ipriorityr = &rank->ipriorityr[REG_RANK_INDEX(8,
                                            gicd_reg - GICD_IPRIORITYR,
                                            DABT_WORD)];
-    vgic_reg32_update(ipriorityr, r, info);
+    priority = ACCESS_ONCE(*ipriorityr);
+    vgic_reg32_update(&priority, r, info);
+    ACCESS_ONCE(*ipriorityr) = priority;
     vgic_unlock_rank(v, rank, flags);
....

Concurrent writes are protected by the rank lock, and reads are
guaranteed to be atomic due to ACCESS_ONCE and the architectural
guarantee that an aligned uint32_t access is atomic.
We can't do anything about the benign race when the priority gets
changed while we read it, but the above should make sure that we either
read the old or the new value and nothing in between.
The other read accesses (in vgic_v[23]_distr_mmio_read() and in vgic.c
here in this patch) get protected by an ACCESS_ONCE().

Does that make sense?

Cheers,
Andre.

> 
>>  }
>>
>>  bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned
>> int irq)
>>
> 
> Cheers,
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
