
Re: [Xen-devel] [PATCH] xen,irq: Clean up __clear_irq_vector



On 28/09/11 14:46, George Dunlap wrote:
> Fix and clean up the logic in __clear_irq_vector().
>
> We always need to clear the things related to cfg->vector.
>
> If the IRQ is currently in motion, then we need to also clear
> out things related to cfg->old_vector.
>
> This patch reorganizes the function to make the parallels between
> the two clean-ups more obvious.
>
> The main functional change here concerns cfg->used_vectors: make
> sure the bit for cfg->vector is always cleared (even if
> !cfg->move_in_progress), and if cfg->move_in_progress, clear the
> bit for cfg->old_vector as well.
>
> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>

Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

> diff -r 59c7213b5949 -r b8359ca81cd9 xen/arch/x86/irq.c
> --- a/xen/arch/x86/irq.c      Tue Sep 27 18:39:15 2011 +0100
> +++ b/xen/arch/x86/irq.c      Wed Sep 28 14:46:12 2011 +0100
> @@ -211,33 +211,23 @@ static void dynamic_irq_cleanup(unsigned
>  
>  static void __clear_irq_vector(int irq)
>  {
> -    int cpu, vector;
> +    int cpu, vector, old_vector;
>      cpumask_t tmp_mask;
>      struct irq_cfg *cfg = irq_cfg(irq);
>  
>      BUG_ON(!cfg->vector);
>  
> +    /* Always clear cfg->vector */
>      vector = cfg->vector;
>      cpus_and(tmp_mask, cfg->cpu_mask, cpu_online_map);
>  
> -    trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, &tmp_mask);
> -
> -    for_each_cpu_mask(cpu, tmp_mask)
> +    for_each_cpu_mask(cpu, tmp_mask) {
> +        ASSERT( per_cpu(vector_irq, cpu)[vector] == irq );
>          per_cpu(vector_irq, cpu)[vector] = -1;
> +    }
>  
>      cfg->vector = IRQ_VECTOR_UNASSIGNED;
>      cpus_clear(cfg->cpu_mask);
> -    cfg->used = IRQ_UNUSED;
> -
> -    if (likely(!cfg->move_in_progress))
> -        return;
> -
> -    cpus_and(tmp_mask, cfg->old_cpu_mask, cpu_online_map);
> -    for_each_cpu_mask(cpu, tmp_mask) {
> -        ASSERT( per_cpu(vector_irq, cpu)[cfg->old_vector] == irq );
> -        TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, vector, cpu);
> -        per_cpu(vector_irq, cpu)[cfg->old_vector] = -1;
> -     }
>  
>      if ( cfg->used_vectors )
>      {
> @@ -245,9 +235,33 @@ static void __clear_irq_vector(int irq)
>          clear_bit(vector, cfg->used_vectors);
>      }
>  
> -    cfg->move_in_progress = 0;
> +    cfg->used = IRQ_UNUSED;
> +
> +    trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, &tmp_mask);
> +
> +    if (likely(!cfg->move_in_progress))
> +        return;
> +
> +    /* If we were in motion, also clear cfg->old_vector */
> +    old_vector = cfg->old_vector;
> +    cpus_and(tmp_mask, cfg->old_cpu_mask, cpu_online_map);
> +
> +    for_each_cpu_mask(cpu, tmp_mask) {
> +        ASSERT( per_cpu(vector_irq, cpu)[old_vector] == irq );
> +        TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
> +        per_cpu(vector_irq, cpu)[old_vector] = -1;
> +     }
> +
>      cfg->old_vector = IRQ_VECTOR_UNASSIGNED;
>      cpus_clear(cfg->old_cpu_mask);
> +
> +    if ( cfg->used_vectors )
> +    {
> +        ASSERT(test_bit(old_vector, cfg->used_vectors));
> +        clear_bit(old_vector, cfg->used_vectors);
> +    }
> +
> +    cfg->move_in_progress = 0;
>  }
>  
>  void clear_irq_vector(int irq)
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com

