Re: [Xen-devel] [PATCH v2] xen arm/arm64: minor improvement in smp_send_call_function_mask()
On Mon, 2014-08-25 at 15:48 +0530, Anup Patel wrote:
> Currently, the smp_send_call_function_mask() function implemented
> by xen arm/arm64 will use an IPI to call a function on the current CPU.
>
> This means that the current smp_send_call_function_mask() will do
> the following on the current CPU:
> Trigger SGI for current CPU => Xen takes interrupt on current CPU
> => IPI interrupt handler will call smp_call_function_interrupt()
>
> This patch improves the above by directly calling
> smp_call_function_interrupt() for the current CPU. This is very
> similar to the smp_send_call_function_mask() implemented by Xen x86.
>
> Changes since v1:
> - Drop the check protecting the send_SGI_mask() call

Please put this sort of thing after the --- break so it doesn't end up
in the final commit log.

>
> Signed-off-by: Anup Patel <anup.patel@xxxxxxxxxx>
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@xxxxxxxxxx>
> Acked-by: Julien Grall <julien.grall@xxxxxxxxxx>

Acked + applied. thanks.

> ---
>  xen/arch/arm/smp.c |   13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/smp.c b/xen/arch/arm/smp.c
> index 30203b8..917d490 100644
> --- a/xen/arch/arm/smp.c
> +++ b/xen/arch/arm/smp.c
> @@ -19,7 +19,18 @@ void smp_send_event_check_mask(const cpumask_t *mask)
>
>  void smp_send_call_function_mask(const cpumask_t *mask)
>  {
> -    send_SGI_mask(mask, GIC_SGI_CALL_FUNCTION);
> +    cpumask_t target_mask;
> +
> +    cpumask_andnot(&target_mask, mask, cpumask_of(smp_processor_id()));
> +
> +    send_SGI_mask(&target_mask, GIC_SGI_CALL_FUNCTION);
> +
> +    if ( cpumask_test_cpu(smp_processor_id(), mask) )
> +    {
> +        local_irq_disable();
> +        smp_call_function_interrupt();
> +        local_irq_enable();
> +    }
> }
>
> /*

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel