
Re: [Xen-devel] [RFC PATCH 31/49] ARM: new VGIC: Add PENDING registers handlers



Hi,

On 16/02/18 17:16, Julien Grall wrote:
> Hi Andre,
> 
> On 09/02/18 14:39, Andre Przywara wrote:
>> The pending register handlers are shared between the v2 and v3
>> emulation, so their implementation goes into vgic-mmio.c, to be easily
>> referenced from the v3 emulation as well later.
>> For level triggered interrupts the real line level is unaffected by
>> this write, so we keep this state separate and combine it with the
>> device's level to get the actual pending state.
>>
>> This is based on Linux commit 96b298000db4, written by Andre Przywara.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxxxxx>
>> ---
>>   xen/arch/arm/vgic/vgic-mmio-v2.c |  4 +--
>>   xen/arch/arm/vgic/vgic-mmio.c    | 62 ++++++++++++++++++++++++++++++++++++++++
>>   xen/arch/arm/vgic/vgic-mmio.h    | 11 +++++++
>>   3 files changed, 75 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/vgic/vgic-mmio-v2.c b/xen/arch/arm/vgic/vgic-mmio-v2.c
>> index eca6840ff9..ceb86900a0 100644
>> --- a/xen/arch/arm/vgic/vgic-mmio-v2.c
>> +++ b/xen/arch/arm/vgic/vgic-mmio-v2.c
>> @@ -80,10 +80,10 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = {
>>           vgic_mmio_read_enable, vgic_mmio_write_cenable, NULL, NULL, 1,
>>           VGIC_ACCESS_32bit),
>>       REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ISPENDR,
>> -        vgic_mmio_read_raz, vgic_mmio_write_wi, NULL, NULL, 1,
>> +        vgic_mmio_read_pending, vgic_mmio_write_spending, NULL, NULL, 1,
>>           VGIC_ACCESS_32bit),
>>       REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ICPENDR,
>> -        vgic_mmio_read_raz, vgic_mmio_write_wi, NULL, NULL, 1,
>> +        vgic_mmio_read_pending, vgic_mmio_write_cpending, NULL, NULL, 1,
>>           VGIC_ACCESS_32bit),
>>       REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ISACTIVER,
>>           vgic_mmio_read_raz, vgic_mmio_write_wi, NULL, NULL, 1,
>> diff --git a/xen/arch/arm/vgic/vgic-mmio.c b/xen/arch/arm/vgic/vgic-mmio.c
>> index 3d9fa02a10..9a65e39d78 100644
>> --- a/xen/arch/arm/vgic/vgic-mmio.c
>> +++ b/xen/arch/arm/vgic/vgic-mmio.c
>> @@ -153,6 +153,68 @@ void vgic_mmio_write_cenable(struct vcpu *vcpu,
>>       }
>>   }
>>   +unsigned long vgic_mmio_read_pending(struct vcpu *vcpu,
>> +                     paddr_t addr, unsigned int len)
> 
> Indentation.
> 
>> +{
>> +    u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
> 
> uint32_t
> 
>> +    u32 value = 0;
> 
> uint32_t
> 
>> +    int i;
> 
> unsigned int
> 
>> +
>> +    /* Loop over all IRQs affected by this read */
>> +    for ( i = 0; i < len * 8; i++ )
>> +    {
>> +        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
>> +
>> +        if ( irq_is_pending(irq) )
>> +            value |= (1U << i);
> 
> Don't you need to propagate the value to the hardware for virtual
> interrupt mapped to physical interrupt?

Do you mean in the write functions below? (This is the read function, so I
don't see how this would apply here.)

In case you meant the write_[cs]pending() functions:
I don't think this makes much sense. Why would you want to trigger
a hardware IRQ? All you want is to deliver it to the guest, which is
what those functions below do. So what am I missing here?
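
For reference, the point the commit message makes about level triggered
interrupts is visible in the irq_is_pending() helper used by the read
function above: the latch written via ISPENDR/ICPENDR is only half of the
pending state, and for level triggered interrupts it gets ORed with the
line level driven by the (virtual) device. Roughly (a sketch from memory of
the helper introduced earlier in this series, not verbatim; irq->config,
VGIC_CONFIG_EDGE and irq->line_level are the names used by those earlier
patches):

    /* Pending means: latch set, or (for level IRQs) line still asserted. */
    static inline bool irq_is_pending(struct vgic_irq *irq)
    {
        if ( irq->config == VGIC_CONFIG_EDGE )
            return irq->pending_latch;
        else
            return irq->pending_latch || irq->line_level;
    }

So clearing the latch in write_cpending() cannot mask an interrupt whose
line is still asserted, and setting it in write_spending() merely injects
the interrupt into the guest, which is why I don't see a need to touch the
physical distributor here.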

Cheers,
Andre.


>> +
>> +        vgic_put_irq(vcpu->domain, irq);
>> +    }
>> +
>> +    return value;
>> +}
>> +
>> +void vgic_mmio_write_spending(struct vcpu *vcpu,
>> +                  paddr_t addr, unsigned int len,
>> +                  unsigned long val)
>> +{
>> +    u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
> 
> uint32_t
> 
>> +    int i;
> 
> unsigned int
> 
>> +    unsigned long flags;
>> +
>> +    for_each_set_bit( i, &val, len * 8 )
>> +    {
>> +        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
>> +
>> +        spin_lock_irqsave(&irq->irq_lock, flags);
>> +        irq->pending_latch = true;
> 
> Ditto.
> 
>> +
>> +        vgic_queue_irq_unlock(vcpu->domain, irq, flags);
>> +        vgic_put_irq(vcpu->domain, irq);
>> +    }
>> +}
>> +
>> +void vgic_mmio_write_cpending(struct vcpu *vcpu,
>> +                  paddr_t addr, unsigned int len,
>> +                  unsigned long val)
>> +{
>> +    u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
>> +    int i;
>> +    unsigned long flags;
>> +
>> +    for_each_set_bit( i, &val, len * 8 )
>> +    {
>> +        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
>> +
>> +        spin_lock_irqsave(&irq->irq_lock, flags);
>> +
>> +        irq->pending_latch = false;
>> +
>> +        spin_unlock_irqrestore(&irq->irq_lock, flags);
>> +        vgic_put_irq(vcpu->domain, irq);
>> +    }
>> +}
>> +
>>   static int match_region(const void *key, const void *elt)
>>   {
>>       const unsigned int offset = (unsigned long)key;
>> diff --git a/xen/arch/arm/vgic/vgic-mmio.h b/xen/arch/arm/vgic/vgic-mmio.h
>> index 9f34bd1aec..209afbbb9a 100644
>> --- a/xen/arch/arm/vgic/vgic-mmio.h
>> +++ b/xen/arch/arm/vgic/vgic-mmio.h
>> @@ -148,6 +148,17 @@ void vgic_mmio_write_cenable(struct vcpu *vcpu,
>>                    paddr_t addr, unsigned int len,
>>                    unsigned long val);
>>   +unsigned long vgic_mmio_read_pending(struct vcpu *vcpu,
>> +                     paddr_t addr, unsigned int len);
>> +
>> +void vgic_mmio_write_spending(struct vcpu *vcpu,
>> +                  paddr_t addr, unsigned int len,
>> +                  unsigned long val);
>> +
>> +void vgic_mmio_write_cpending(struct vcpu *vcpu,
>> +                  paddr_t addr, unsigned int len,
>> +                  unsigned long val);
>> +
>>   unsigned int vgic_v2_init_dist_iodev(struct vgic_io_device *dev);
>>   /* Find the proper register handler entry given a certain address offset */
>>
> 
> Cheers,
> 
