
Re: [Xen-devel] [PATCH 32/57] ARM: new VGIC: Add GICv2 world switch backend



Hi,

On 07/03/18 12:10, Julien Grall wrote:
> Hi Andre,
> 
> On 03/05/2018 04:03 PM, Andre Przywara wrote:
>> +void vgic_v2_fold_lr_state(struct vcpu *vcpu)
>> +{
>> +    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
>> +    unsigned int used_lrs = vcpu->arch.vgic.used_lrs;
>> +    unsigned long flags;
>> +    unsigned int lr;
>> +
>> +    if ( !used_lrs )    /* No LRs used, so nothing to sync back here. */
>> +        return;
>> +
>> +    gic_hw_ops->update_hcr_status(GICH_HCR_UIE, 0);
>> +
>> +    for ( lr = 0; lr < used_lrs; lr++ )
>> +    {
>> +        struct gic_lr lr_val;
>> +        uint32_t intid;
>> +        struct vgic_irq *irq;
>> +
>> +        gic_hw_ops->read_lr(lr, &lr_val);
>> +
>> +        /*
>> +         * TODO: Possible optimization to avoid reading LRs:
>> +         * Read the ELRSR to find out which of our LRs have been cleared
>> +         * by the guest. We just need to know the IRQ number for those,
>> +         * which we could save in an array when populating the LRs.
>> +         * This trades one MMIO access (ELRSR) for possibly more than
>> +         * one (LRs), but requires some more code to save the IRQ number
>> +         * and to handle those finished IRQs according to the algorithm
>> +         * below.
>> +         * We need some numbers to justify this: chances are that we
>> +         * don't have many LRs in use most of the time, so we might not
>> +         * save much.
>> +         */
>> +        gic_hw_ops->clear_lr(lr);
>> +
>> +        intid = lr_val.virq;
>> +        irq = vgic_get_irq(vcpu->domain, vcpu, intid);
>> +
>> +        spin_lock_irqsave(&irq->irq_lock, flags);
>> +
>> +        /* Always preserve the active bit */
>> +        irq->active = !!(lr_val.state & GICH_LR_ACTIVE);
>> +
>> +        /* Edge is the only case where we preserve the pending bit */
>> +        if ( irq->config == VGIC_CONFIG_EDGE &&
>> +             (lr_val.state & GICH_LR_PENDING) )
>> +        {
>> +            irq->pending_latch = true;
>> +
>> +            if ( vgic_irq_is_sgi(intid) )
>> +                irq->source |= (1U << lr_val.source);
>> +        }
> 
> KVM is clearing pending_latch for level IRQs. Why is this not done in Xen?

Good question. I spotted this myself on Monday when adding vGICv3 support.
I checked an old branch: it turns out I accidentally removed it when merging
in some later KVM changes.
So it's already back in my tree.
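
For reference, the hunk in question looks roughly like this (a sketch
mirroring the KVM logic, reusing the names from the patch above, with
VGIC_CONFIG_LEVEL as the level-triggered counterpart of VGIC_CONFIG_EDGE;
the exact version in the tree may differ slightly). It would go right after
the edge-triggered block quoted above:

        /*
         * Clear the soft pending state once the guest has acked a level
         * triggered interrupt, i.e. the LR is neither pending nor active
         * any more.
         */
        if ( irq->config == VGIC_CONFIG_LEVEL &&
             !(lr_val.state & (GICH_LR_PENDING | GICH_LR_ACTIVE)) )
            irq->pending_latch = false;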

Cheers,
Andre.

> 
>> +
>> +    /*
>> +     * Level-triggered mapped IRQs are special because we only
>> +     * observe rising edges as input to the VGIC.
>> +     *
>> +     * If the guest never acked the interrupt we have to sample
>> +     * the physical line and set the line level, because the
>> +     * device state could have changed or we simply need to
>> +     * process the still pending interrupt later.
>> +     *
>> +     * If this causes us to lower the level, we have to also clear
>> +     * the physical active state, since we will otherwise never be
>> +     * told when the interrupt becomes asserted again.
>> +     */
> 
> The indentation of the comment looks wrong.
> 
>> +        if ( vgic_irq_is_mapped_level(irq) &&
>> +             (lr_val.state & GICH_LR_PENDING) )
>> +        {
>> +            struct irq_desc *irqd;
>> +
>> +            ASSERT(irq->hwintid >= VGIC_NR_PRIVATE_IRQS);
>> +
>> +            irqd = irq_to_desc(irq->hwintid);
>> +            irq->line_level = gic_read_pending_state(irqd);
>> +
>> +            if ( !irq->line_level )
>> +                gic_set_active_state(irqd, false);
>> +        }
>> +
>> +        spin_unlock_irqrestore(&irq->irq_lock, flags);
>> +        vgic_put_irq(vcpu->domain, irq);
>> +    }
>> +
>> +    gic_hw_ops->update_hcr_status(GICH_HCR_EN, 0);
>> +    vgic_cpu->used_lrs = 0;
>> +}
> 
> Cheers,
> 
