
Re: [PATCH] x86/irq: do not release irq until all cleanup is done


  • To: Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 15 Nov 2022 17:29:47 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 15 Nov 2022 16:30:00 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 15.11.2022 13:35, Roger Pau Monne wrote:
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -220,27 +220,27 @@ static void _clear_irq_vector(struct irq_desc *desc)
>          clear_bit(vector, desc->arch.used_vectors);
>      }
>  
> -    desc->arch.used = IRQ_UNUSED;
> -
> -    trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, tmp_mask);

The use of tmp_mask here was ...

> +    if ( unlikely(desc->arch.move_in_progress) )
> +    {
> +        /* If we were in motion, also clear desc->arch.old_vector */
> +        old_vector = desc->arch.old_vector;
> +        cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);

... before the variable was updated here. Now you move it past this
update (to the very end of the function). I wonder whether, to keep
things simple in this change, it would be tolerable to leave the
trace_irq_mask() invocation where it was, despite cleanup not having
completed yet at that point. (The alternative would look to be to
act directly on desc->arch.old_cpu_mask in the code you re-indent,
leaving tmp_mask alone. I think that ought to be okay since nothing
else should access that mask anymore. We could even avoid altering
the field, by going from cpumask_and() to a cpu_online() check in
the for_each_cpu() body.)

>  
> -    if ( likely(!desc->arch.move_in_progress) )
> -        return;
> +        for_each_cpu(cpu, tmp_mask)
> +        {
> +            ASSERT(per_cpu(vector_irq, cpu)[old_vector] == irq);
> +            TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
> +            per_cpu(vector_irq, cpu)[old_vector] = ~irq;
> +        }
>  
> -    /* If we were in motion, also clear desc->arch.old_vector */
> -    old_vector = desc->arch.old_vector;
> -    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
> +        release_old_vec(desc);
>  
> -    for_each_cpu(cpu, tmp_mask)
> -    {
> -        ASSERT(per_cpu(vector_irq, cpu)[old_vector] == irq);
> -        TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
> -        per_cpu(vector_irq, cpu)[old_vector] = ~irq;
> +        desc->arch.move_in_progress = 0;
>      }
>  
> -    release_old_vec(desc);
> +    write_atomic(&desc->arch.used, IRQ_UNUSED);

Now that there is an ordering requirement between these last two writes,
I think you want to add smp_wmb() after the first one (still inside the
inner scope). write_atomic() is only a volatile asm() (which the compiler
may move under certain conditions) and an access through a volatile
pointer (which isn't ordered with non-volatile memory accesses), but it
is specifically not a memory barrier.

Jan

> -    desc->arch.move_in_progress = 0;
> +    trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, tmp_mask);
>  }
>  
>  void __init clear_irq_vector(int irq)
