
Re: [Xen-devel] [PATCH v3 32/39] ARM: new VGIC: Implement arch_move_irqs()



Hi,

On 28/03/18 19:47, Stefano Stabellini wrote:
> On Wed, 21 Mar 2018, Andre Przywara wrote:
>> When a VCPU moves to another CPU, we need to adjust the target affinity
>> of any hardware mapped vIRQs, to observe our "physical-follows-virtual"
>> policy.
>> Implement arch_move_irqs() to adjust the physical affinity of all hardware
>> mapped vIRQs targeting this VCPU.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxxxxx>
>> Reviewed-by: Julien Grall <julien.grall@xxxxxxx>
>> ---
>>  xen/arch/arm/vgic/vgic.c | 39 +++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 39 insertions(+)
>>
>> diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
>> index ffab0b2635..23b8abfc5e 100644
>> --- a/xen/arch/arm/vgic/vgic.c
>> +++ b/xen/arch/arm/vgic/vgic.c
>> @@ -791,6 +791,45 @@ void gic_dump_vgic_info(struct vcpu *v)
>>      spin_unlock_irqrestore(&v->arch.vgic.ap_list_lock, flags);
>>  }
>>  
>> +/**
>> + * arch_move_irqs() - migrate the physical affinity of hardware mapped vIRQs
>> + * @v:  the vCPU, already assigned to the new pCPU
>> + *
>> + * arch_move_irqs() updates the physical affinity of all virtual IRQs
>> + * targeting this given vCPU. This only affects hardware mapped IRQs. The
>> + * new pCPU to target is already set in v->processor.
>> + * This is called by the core code after a vCPU has been migrated to a new
>> + * physical CPU.
>> + */
>> +void arch_move_irqs(struct vcpu *v)
>> +{
>> +    struct domain *d = v->domain;
>> +    unsigned int i;
>> +
>> +    /* We only target SPIs with this function */
>> +    for ( i = 0; i < d->arch.vgic.nr_spis; i++ )
>> +    {
>> +        struct vgic_irq *irq = vgic_get_irq(d, NULL, i + VGIC_NR_PRIVATE_IRQS);
>> +        unsigned long flags;
>> +
>> +        if ( !irq )
>> +            continue;
>> +
>> +        spin_lock_irqsave(&irq->irq_lock, flags);
>> +
>> +        /* only vIRQs that are not on a vCPU yet, but targeting this vCPU */
>> +        if ( irq->hw && !irq->vcpu && irq->target_vcpu == v)
>> +        {
> 
> In vgic_mmio_write_target, we change the physical irq affinity
> immediately, without checking for !irq->vcpu.

> I think it is OK because if a second interrupt for vcpuB comes in on
> cpuB while it is still injected in vcpuA/cpuA, vgic_get_irq returns the
> same vgic_irq instance and vgic_inject_irq sets pending_latch to true.
> vgic_queue_irq_unlock does nothing because irq->vcpu is set. Then when
> vcpuA traps into Xen, vgic_prune_ap_list will take care of moving the
> vgic_irq to the ap_list belonging to vcpuB.
> 
> This seems to work, but don't we also need a vcpu_kick at the end of
> vgic_prune_ap_list to make sure the changes take effect in vcpuB? vcpuB
> could take a very long time to trap back into Xen again.

That seems like a valid question to me. But as KVM should be in the same
situation and there is no kick() there, I would like to forward this
question to Christoffer and Marc - who will probably not answer before
Tuesday. So I would ask for a follow-up fix if this is considered legitimate.
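
To make the idea concrete (this is only a sketch, not tested code): if the
kick turns out to be needed, I would expect it to sit at the point in
vgic_prune_ap_list() where an IRQ has just been re-queued on its new target
vCPU. The variable and field names below merely follow the style of this
series (ap_list_head is my guess for the list head guarded by the
ap_list_lock used above), so please read it as pseudo-code:

    /*
     * Sketch only: once the vgic_irq has been moved from this vCPU's
     * ap_list over to irq->target_vcpu's ap_list, poke the target vCPU
     * so it re-evaluates its list registers soon, instead of waiting
     * for its next trap into Xen.
     */
    if ( target_vcpu != vcpu )
    {
        /* ... existing code re-queuing the IRQ on target_vcpu ... */
        list_del(&irq->ap_list);
        list_add_tail(&irq->ap_list, &target_vcpu->arch.vgic.ap_list_head);
        irq->vcpu = target_vcpu;

        vcpu_kick(target_vcpu);    /* <-- the proposed addition */
    }

Whether vcpu_kick() should be called with the ap_list locks still held would
need double-checking; if not, it could be deferred until after the locks are
dropped.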

> But the real question is: why do we need to check for !irq->vcpu here?
> And worse: if an interrupt has irq->vcpu set, then who will take care of
> fixing the physical irq affinity later?

I think you are right, we don't need to consider irq->vcpu here. Just as
with the GIC behaviour we emulate, the affinity only matters for the *next*
IRQ to trigger. IRQs that are currently being handled are not affected; a
changed affinity does not change anything for them.

> It looks like we should remove the "&& !irq->vcpu" here so that we can
> rely on the same mechanism already in place for ITARGETSR changes.
> However, would that work with already active interrupts? I think it
> should but I wanted to double check.

Looks all right to me; I will remove the "&& !irq->vcpu" check.
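
So for v4 the condition above would simply become:

    if ( irq->hw && irq->target_vcpu == v )

i.e. matching vgic_mmio_write_target, which also does not check irq->vcpu
before changing the physical affinity.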

Cheers,
Andre.

>> +            irq_desc_t *desc = irq_to_desc(irq->hwintid);
>> +
>> +            irq_set_affinity(desc, cpumask_of(v->processor));
>> +        }
>> +
>> +        spin_unlock_irqrestore(&irq->irq_lock, flags);
>> +        vgic_put_irq(d, irq);
>> +    }
>> +}
>> +
>>  struct irq_desc *vgic_get_hw_irq_desc(struct domain *d, struct vcpu *v,
>>                                        unsigned int virq)
>>  {

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

