
Re: [Xen-devel] [PATCH v2 07/10] xen/arm: Release maintenance interrupt when CPU is hot-unplugged



Hi Julien,

On Mon, Apr 23, 2018 at 1:33 PM, Julien Grall <julien.grall@xxxxxxx> wrote:
> Hi Mirela,
>
>
> On 20/04/18 13:25, Mirela Simonovic wrote:
>>
>> When a CPU is hot-unplugged, the maintenance interrupt has to be
>> released in order to free the memory that was allocated when the CPU
>> was hotplugged and the interrupt was requested. The interrupt was
>> requested using request_irq(), which is called from
>> start_secondary->init_maintenance_interrupt.
>>
>> Signed-off-by: Mirela Simonovic <mirela.simonovic@xxxxxxxxxx>
>>
>> ---
>> CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
>> CC: Julien Grall <julien.grall@xxxxxxx>
>> ---
>>   xen/arch/arm/gic.c        | 5 +++++
>>   xen/arch/arm/smpboot.c    | 7 +++++++
>>   xen/include/asm-arm/gic.h | 1 +
>>   3 files changed, 13 insertions(+)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index 653a815127..e536b99e84 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -431,6 +431,11 @@ void init_maintenance_interrupt(void)
>>                   "irq-maintenance", NULL);
>>   }
>>
>> +void deinit_maintenance_interrupt(void)
>> +{
>> +    release_irq(gic_hw_ops->info->maintenance_irq, NULL);
>> +}
>> +
>>   int gic_make_hwdom_dt_node(const struct domain *d,
>>                              const struct dt_device_node *gic,
>>                              void *fdt)
>> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
>> index abc642804f..449fefc77d 100644
>> --- a/xen/arch/arm/smpboot.c
>> +++ b/xen/arch/arm/smpboot.c
>> @@ -375,11 +375,18 @@ void __cpu_disable(void)
>>         local_irq_disable();
>>       gic_disable_cpu();
>> +
>
>
> Spurious change.
>
>>       /* Allow any queued timer interrupts to get serviced */
>>       local_irq_enable();
>>       mdelay(1);
>>       local_irq_disable();
>>   +    /*
>> +     * Deinitialize interrupts (this will free the memory that was allocated
>> +     * in respective init interrupt functions called from start_secondary)
>> +     */
>> +    deinit_maintenance_interrupt();
>
>
> Can you have a look at using a notifier (see CPU_DYING)? This would avoid
> exporting too many new functions.

I believe releasing the maintenance IRQ should happen after the dying
CPU's GIC interface is disabled.
To get that ordering with notifiers I would need to move these lines
from __cpu_disable() into the notifier callback, under the CPU_DYING
case:
        local_irq_disable();
        gic_disable_cpu();
        local_irq_enable();
and then, below these lines in the callback, I would add:
        release_irq(gic_hw_ops->info->maintenance_irq, NULL);

This would have to be done because CPU_DYING notifiers execute before
__cpu_disable().
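
For illustration, here is a rough sketch of the notifier I have in mind
(untested, and the callback/registration names are just placeholders; I'm
assuming it would live in xen/arch/arm/gic.c so that gic_hw_ops is in
scope, with xen/cpu.h and xen/notifier.h included there):

    static int gic_cpu_callback(struct notifier_block *nfb,
                                unsigned long action, void *hcpu)
    {
        switch ( action )
        {
        case CPU_DYING:
            /* Runs on the dying CPU, before __cpu_disable() is called. */
            local_irq_disable();
            gic_disable_cpu();
            local_irq_enable();
            /* Frees the memory allocated by init_maintenance_interrupt(). */
            release_irq(gic_hw_ops->info->maintenance_irq, NULL);
            break;
        default:
            break;
        }

        return NOTIFY_DONE;
    }

    static struct notifier_block gic_cpu_nfb = {
        .notifier_call = gic_cpu_callback,
    };

    static int __init gic_cpu_notifier_init(void)
    {
        register_cpu_notifier(&gic_cpu_nfb);
        return 0;
    }
    __initcall(gic_cpu_notifier_init);

The __initcall registration point is only illustrative; registering
earlier (e.g. via presmp_initcall) might be more appropriate.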
How does that sound? If it's OK, should these changes be split into two
patches, 1) the notifier-based call to gic_disable_cpu() and 2) releasing
the maintenance IRQ (I believe that would be better), or should I merge them?

Thanks,
Mirela

>
>> +
>>       /* It's now safe to remove this processor from the online map */
>>       cpumask_clear_cpu(cpu, &cpu_online_map);
>>   diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
>> index 58b910fe6a..0db42e6cce 100644
>> --- a/xen/include/asm-arm/gic.h
>> +++ b/xen/include/asm-arm/gic.h
>> @@ -254,6 +254,7 @@ extern void gic_clear_pending_irqs(struct vcpu *v);
>>   extern int vgic_vcpu_pending_irq(struct vcpu *v);
>>     extern void init_maintenance_interrupt(void);
>> +extern void deinit_maintenance_interrupt(void);
>>   extern void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
>>           unsigned int priority);
>>   extern void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq);
>>
>
> Cheers,
>
> --
> Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

