
Re: [PATCH 3/6] xen/softirq: Rework arch_skip_send_event_check() into arch_pend_softirq()



On Wed, Jul 02, 2025 at 03:41:18PM +0100, Andrew Cooper wrote:
> x86 is the only architecture wanting an optimisation here, but the
> test_and_set_bit() is a store into the monitored line and is racy with
> determining whether an IPI can be skipped.
> 
> Instead, implement a new arch helper with different semantics; to make the
> softirq pending and decide about IPIs together.  For now, implement the
> default helper.  It will be overridden by x86 in a subsequent change.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

With the adjusted commit message you proposed to Jan:

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Just one nit to comment on below.

> ---
> CC: Jan Beulich <JBeulich@xxxxxxxx>
> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> CC: Anthony PERARD <anthony.perard@xxxxxxxxxx>
> CC: Michal Orzel <michal.orzel@xxxxxxx>
> CC: Julien Grall <julien@xxxxxxx>
> CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> ---
>  xen/arch/x86/acpi/cpu_idle.c       |  5 -----
>  xen/arch/x86/include/asm/softirq.h |  2 --
>  xen/common/softirq.c               |  8 ++------
>  xen/include/xen/softirq.h          | 16 ++++++++++++++++
>  4 files changed, 18 insertions(+), 13 deletions(-)
> 
> diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
> index dbcb80d81db9..caa0fef0b3b1 100644
> --- a/xen/arch/x86/acpi/cpu_idle.c
> +++ b/xen/arch/x86/acpi/cpu_idle.c
> @@ -436,11 +436,6 @@ static int __init cf_check cpu_idle_key_init(void)
>  }
>  __initcall(cpu_idle_key_init);
>  
> -bool arch_skip_send_event_check(unsigned int cpu)
> -{
> -    return false;
> -}
> -
>  void mwait_idle_with_hints(unsigned int eax, unsigned int ecx)
>  {
>      unsigned int cpu = smp_processor_id();
> diff --git a/xen/arch/x86/include/asm/softirq.h b/xen/arch/x86/include/asm/softirq.h
> index 415ee866c79d..e4b194f069fb 100644
> --- a/xen/arch/x86/include/asm/softirq.h
> +++ b/xen/arch/x86/include/asm/softirq.h
> @@ -9,6 +9,4 @@
>  #define HVM_DPCI_SOFTIRQ       (NR_COMMON_SOFTIRQS + 4)
>  #define NR_ARCH_SOFTIRQS       5
>  
> -bool arch_skip_send_event_check(unsigned int cpu);
> -
>  #endif /* __ASM_SOFTIRQ_H__ */
> diff --git a/xen/common/softirq.c b/xen/common/softirq.c
> index 60f344e8425e..0368011f95d1 100644
> --- a/xen/common/softirq.c
> +++ b/xen/common/softirq.c
> @@ -94,9 +94,7 @@ void cpumask_raise_softirq(const cpumask_t *mask, unsigned int nr)
>          raise_mask = &per_cpu(batch_mask, this_cpu);
>  
>      for_each_cpu(cpu, mask)
> -        if ( !test_and_set_bit(nr, &softirq_pending(cpu)) &&
> -             cpu != this_cpu &&
> -             !arch_skip_send_event_check(cpu) )
> +        if ( !arch_pend_softirq(nr, cpu) && cpu != this_cpu )
>              __cpumask_set_cpu(cpu, raise_mask);
>  
>      if ( raise_mask == &send_mask )
> @@ -107,9 +105,7 @@ void cpu_raise_softirq(unsigned int cpu, unsigned int nr)
>  {
>      unsigned int this_cpu = smp_processor_id();
>  
> -    if ( test_and_set_bit(nr, &softirq_pending(cpu))
> -         || (cpu == this_cpu)
> -         || arch_skip_send_event_check(cpu) )
> +    if ( arch_pend_softirq(nr, cpu) || cpu == this_cpu )
>          return;
>  
>      if ( !per_cpu(batching, this_cpu) || in_irq() )
> diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
> index e9f79ec0ce3e..0814c8e5cd41 100644
> --- a/xen/include/xen/softirq.h
> +++ b/xen/include/xen/softirq.h
> @@ -23,6 +23,22 @@ enum {
>  
>  #define NR_SOFTIRQS (NR_COMMON_SOFTIRQS + NR_ARCH_SOFTIRQS)
>  
> +/*
> + * Ensure softirq @nr is pending on @cpu.  Return true if an IPI can be
> + * skipped, false if the IPI cannot be skipped.
> + */
> +#ifndef arch_pend_softirq
> +static always_inline bool arch_pend_softirq(unsigned int nr, unsigned int cpu)

Nit: I would maybe name it arch_set_softirq(), as I find `pend` not that clear
(I would rather fully spell `pending` instead).

Thanks, Roger.
