
[Xen-changelog] [xen-unstable] x86, irq: Clean up __clear_irq_vector



# HG changeset patch
# User Keir Fraser <keir@xxxxxxx>
# Date 1317413721 -3600
# Node ID d568e2313fd6f055b66a6c3cb2bca6372b77692e
# Parent  a50da1a6423fc4cbbefe9ff391b5aab668170044
x86,irq: Clean up __clear_irq_vector

Fix and clean up the logic in __clear_irq_vector().

We always need to clear the state related to cfg->vector.

If the IRQ is currently in motion, we also need to clear the
state related to cfg->old_vector.

This patch reorganizes the function to make the parallels between
the two clean-ups more obvious.

The main functional change here concerns cfg->used_vectors: always
clear the bit for cfg->vector (even when !cfg->move_in_progress),
and if cfg->move_in_progress, clear the bit for cfg->old_vector as well.

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---


diff -r a50da1a6423f -r d568e2313fd6 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c        Fri Sep 30 21:14:34 2011 +0100
+++ b/xen/arch/x86/irq.c        Fri Sep 30 21:15:21 2011 +0100
@@ -211,33 +211,23 @@
 
 static void __clear_irq_vector(int irq)
 {
-    int cpu, vector;
+    int cpu, vector, old_vector;
     cpumask_t tmp_mask;
     struct irq_cfg *cfg = irq_cfg(irq);
 
     BUG_ON(!cfg->vector);
 
+    /* Always clear cfg->vector */
     vector = cfg->vector;
     cpus_and(tmp_mask, cfg->cpu_mask, cpu_online_map);
 
-    trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, &tmp_mask);
-
-    for_each_cpu_mask(cpu, tmp_mask)
+    for_each_cpu_mask(cpu, tmp_mask) {
+        ASSERT( per_cpu(vector_irq, cpu)[vector] == irq );
         per_cpu(vector_irq, cpu)[vector] = -1;
+    }
 
     cfg->vector = IRQ_VECTOR_UNASSIGNED;
     cpus_clear(cfg->cpu_mask);
-    cfg->used = IRQ_UNUSED;
-
-    if (likely(!cfg->move_in_progress))
-        return;
-
-    cpus_and(tmp_mask, cfg->old_cpu_mask, cpu_online_map);
-    for_each_cpu_mask(cpu, tmp_mask) {
-        ASSERT( per_cpu(vector_irq, cpu)[cfg->old_vector] == irq );
-        TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, vector, cpu);
-        per_cpu(vector_irq, cpu)[cfg->old_vector] = -1;
-     }
 
     if ( cfg->used_vectors )
     {
@@ -245,9 +235,33 @@
         clear_bit(vector, cfg->used_vectors);
     }
 
-    cfg->move_in_progress = 0;
+    cfg->used = IRQ_UNUSED;
+
+    trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, &tmp_mask);
+
+    if (likely(!cfg->move_in_progress))
+        return;
+
+    /* If we were in motion, also clear cfg->old_vector */
+    old_vector = cfg->old_vector;
+    cpus_and(tmp_mask, cfg->old_cpu_mask, cpu_online_map);
+
+    for_each_cpu_mask(cpu, tmp_mask) {
+        ASSERT( per_cpu(vector_irq, cpu)[old_vector] == irq );
+        TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
+        per_cpu(vector_irq, cpu)[old_vector] = -1;
+     }
+
     cfg->old_vector = IRQ_VECTOR_UNASSIGNED;
     cpus_clear(cfg->old_cpu_mask);
+
+    if ( cfg->used_vectors )
+    {
+        ASSERT(test_bit(old_vector, cfg->used_vectors));
+        clear_bit(old_vector, cfg->used_vectors);
+    }
+
+    cfg->move_in_progress = 0;
 }
 
 void clear_irq_vector(int irq)

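For illustration, below is a minimal, self-contained C model of the
used_vectors bookkeeping this patch fixes. It is not the Xen code: the
type irq_cfg_model, the helpers test_bit_model()/set_bit_model()/
clear_bit_model(), clear_irq_vector_model(), and the vector numbers are
hypothetical stand-ins for Xen's struct irq_cfg, its bitmap helpers, and
real vector assignments. It only sketches the post-patch flow, in which
the bit for cfg->vector is always released, and the bit for
cfg->old_vector is released only when a move was in progress.

/* Hypothetical stand-alone model of the used_vectors bookkeeping -- not
 * the actual Xen implementation. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_VECTORS            256
#define VECTOR_UNASSIGNED     (-1)

struct irq_cfg_model {
    int vector;
    int old_vector;
    bool move_in_progress;
    unsigned char used_vectors[NR_VECTORS / 8];  /* one bit per vector */
};

static bool test_bit_model(int nr, const unsigned char *map)
{
    return map[nr / 8] & (1u << (nr % 8));
}

static void set_bit_model(int nr, unsigned char *map)
{
    map[nr / 8] |= 1u << (nr % 8);
}

static void clear_bit_model(int nr, unsigned char *map)
{
    map[nr / 8] &= ~(1u << (nr % 8));
}

/* Mirrors the post-patch flow of __clear_irq_vector(): always release
 * cfg->vector, and release cfg->old_vector only if a move was pending. */
static void clear_irq_vector_model(struct irq_cfg_model *cfg)
{
    int vector = cfg->vector;
    int old_vector;

    assert(vector != VECTOR_UNASSIGNED);

    /* Always clear cfg->vector. */
    cfg->vector = VECTOR_UNASSIGNED;
    assert(test_bit_model(vector, cfg->used_vectors));
    clear_bit_model(vector, cfg->used_vectors);

    if ( !cfg->move_in_progress )
        return;

    /* If we were in motion, also clear cfg->old_vector. */
    old_vector = cfg->old_vector;
    cfg->old_vector = VECTOR_UNASSIGNED;
    assert(test_bit_model(old_vector, cfg->used_vectors));
    clear_bit_model(old_vector, cfg->used_vectors);

    cfg->move_in_progress = false;
}

int main(void)
{
    struct irq_cfg_model cfg = {
        .vector = 0x40,       /* hypothetical current vector */
        .old_vector = 0x30,   /* hypothetical vector being moved away from */
        .move_in_progress = true,
    };

    set_bit_model(0x40, cfg.used_vectors);
    set_bit_model(0x30, cfg.used_vectors);

    clear_irq_vector_model(&cfg);

    /* Both bits are now released.  Before the patch, the cfg->vector bit
     * was only cleared when a move was in progress, and the old_vector
     * bit was never cleared at all. */
    printf("0x40 used: %d, 0x30 used: %d\n",
           test_bit_model(0x40, cfg.used_vectors),
           test_bit_model(0x30, cfg.used_vectors));
    return 0;
}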