
Re: [PATCH 4/6] x86/idle: Implement a new MWAIT IPI-elision algorithm



On Wed, Jul 02, 2025 at 03:41:19PM +0100, Andrew Cooper wrote:
> diff --git a/xen/arch/x86/include/asm/softirq.h b/xen/arch/x86/include/asm/softirq.h
> index e4b194f069fb..069e5716a68d 100644
> --- a/xen/arch/x86/include/asm/softirq.h
> +++ b/xen/arch/x86/include/asm/softirq.h
> @@ -1,6 +1,8 @@
>  #ifndef __ASM_SOFTIRQ_H__
>  #define __ASM_SOFTIRQ_H__
>  
> +#include <asm/system.h>
> +
>  #define NMI_SOFTIRQ            (NR_COMMON_SOFTIRQS + 0)
>  #define TIME_CALIBRATE_SOFTIRQ (NR_COMMON_SOFTIRQS + 1)
>  #define VCPU_KICK_SOFTIRQ      (NR_COMMON_SOFTIRQS + 2)
> @@ -9,4 +11,36 @@
>  #define HVM_DPCI_SOFTIRQ       (NR_COMMON_SOFTIRQS + 4)
>  #define NR_ARCH_SOFTIRQS       5
>  
> +/*
> + * Ensure softirq @nr is pending on @cpu.  Return true if an IPI can be
> + * skipped, false if the IPI cannot be skipped.
> + *
> + * We use a CMPXCHG covering both __softirq_pending and in_mwait, in order to
> + * set softirq @nr while also observing in_mwait in a race-free way.
> + */
> +static always_inline bool arch_pend_softirq(unsigned int nr, unsigned int cpu)
> +{
> +    uint64_t *ptr = &irq_stat[cpu].softirq_mwait_raw;
> +    uint64_t old, new;
> +    unsigned int softirq = 1U << nr;
> +
> +    old = ACCESS_ONCE(*ptr);
> +
> +    do {
> +        if ( old & softirq )
> +            /* Softirq already pending, nothing to do. */
> +            return true;
> +
> +        new = old | softirq;
> +
> +    } while ( (old = cmpxchg(ptr, old, new)) != new );
> +
> +    /*
> +     * We have caused the softirq to become pending.  If in_mwait was set, the
> +     * target CPU will notice the modification and act on it.
> +     */
> +    return new & (1UL << 32);

Maybe I haven't got enough coffee yet, but with the cmpxchg() isn't it
enough to send the IPI when the first softirq is set, and not
necessarily for each extra softirq that's set while another one is
already pending?  IOW:

    uint64_t new, softirq = 1U << nr;
    uint64_t old = ACCESS_ONCE(*ptr);

    do {
        if ( old & softirq )
            /* Softirq already pending, nothing to do. */
            return true;

        new = old | softirq;

    } while ( (old = cmpxchg(ptr, old, new)) != (new & ~softirq) );

    /*
     * We have caused the softirq to become pending when it was
     * previously unset.  If in_mwait was set, the target CPU will
     * notice the modification and act on it.
     *
     * Reduce the logic to simply check whether the old value was
     * different than 0: it will either be the in_mwait field or any
     * already pending softirqs.
     */
    return old;

Thanks, Roger.



 

