
[Xen-changelog] [xen-unstable] x86: Fix up irq vector map logic



# HG changeset patch
# User George Dunlap <george.dunlap@xxxxxxxxxxxxx>
# Date 1314026133 -3600
# Node ID 3a05da2dc7c0a5fc0fcfc40c535d1fcb71203625
# Parent  d1cd78a73a79e0e648937322cdb8d92a7f86327a
x86: Fix up irq vector map logic

We need to make sure that a vector's bit in cfg->used_vectors is only
cleared once; otherwise there may be a race condition that allows the
same vector to be assigned twice, defeating the whole purpose of the map.
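
To make the race concrete, here is a minimal standalone sketch -- not Xen
code; it uses simplified stand-ins for the per-irq state (a plain bool
array instead of the real bitmap, no desc->lock, hypothetical names).
With two CPUs still owing a cleanup IPI, clearing the bit on the first
IPI lets the allocator hand the same vector out again before the second
CPU has stopped using it:

/* Standalone illustration of the old ordering -- simplified stand-ins. */
#include <stdbool.h>
#include <stdio.h>

#define NR_VECTORS 256

struct cfg_sketch {
    bool used_vectors[NR_VECTORS];   /* stand-in for the real bitmap     */
    int  move_cleanup_count;         /* CPUs still draining the vector   */
};

/* Old ordering: clear the bit on every cleanup IPI, then decrement. */
static void cleanup_ipi_old(struct cfg_sketch *cfg, int vector)
{
    cfg->used_vectors[vector] = false;       /* cleared too early */
    cfg->move_cleanup_count--;
}

/* The allocator treats a clear bit as "vector available". */
static bool vector_looks_free(const struct cfg_sketch *cfg, int vector)
{
    return !cfg->used_vectors[vector];
}

int main(void)
{
    struct cfg_sketch cfg = { .move_cleanup_count = 2 };
    int vector = 0x40;                       /* arbitrary dynamic vector */

    cfg.used_vectors[vector] = true;

    cleanup_ipi_old(&cfg, vector);           /* IPI handled on CPU 0 only */

    /* CPU 1 has not run its cleanup yet, but the map already says free: */
    printf("cleanup pending on %d CPU(s), vector looks free: %d\n",
           cfg.move_cleanup_count, vector_looks_free(&cfg, vector));
    return 0;
}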

This makes two changes:
* __clear_irq_vector() only clears the vector map if the irq is not
  being moved
* smp_irq_move_cleanup_interrupt() only clears used_vectors if this
  is the last place it's being used (move_cleanup_count==0 after
  decrement).

Also make the use of asserts more consistent, to catch this kind of
logic bug in the future.
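
The cleanup-interrupt change, paraphrased below as a standalone sketch
with the same simplified stand-ins (the real handler runs under
desc->lock and also checks that cfg->used_vectors is non-NULL):
decrement move_cleanup_count first, and only clear the bit once no CPU
still has cleanup pending.

/* Standalone illustration of the corrected ordering -- simplified stand-ins. */
#include <assert.h>
#include <stdbool.h>

#define NR_VECTORS 256

struct cfg_sketch {
    bool used_vectors[NR_VECTORS];
    int  move_cleanup_count;
};

/*
 * New ordering: decrement first, and only drop the bit once no CPU still
 * has cleanup pending, so the vector cannot be re-assigned while live.
 */
static void cleanup_ipi_new(struct cfg_sketch *cfg, int vector)
{
    cfg->move_cleanup_count--;

    if (cfg->move_cleanup_count == 0) {
        assert(cfg->used_vectors[vector]);   /* must still be marked used */
        cfg->used_vectors[vector] = false;
    }
}

int main(void)
{
    struct cfg_sketch cfg = { .move_cleanup_count = 2 };
    int vector = 0x40;

    cfg.used_vectors[vector] = true;
    cleanup_ipi_new(&cfg, vector);           /* CPU 0: bit stays set        */
    cleanup_ipi_new(&cfg, vector);           /* CPU 1: last user, bit clear */
    assert(!cfg.used_vectors[vector]);
    return 0;
}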

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
---


diff -r d1cd78a73a79 -r 3a05da2dc7c0 xen/arch/x86/io_apic.c
--- a/xen/arch/x86/io_apic.c    Mon Aug 22 16:15:19 2011 +0100
+++ b/xen/arch/x86/io_apic.c    Mon Aug 22 16:15:33 2011 +0100
@@ -485,12 +485,14 @@
                  irq, vector, smp_processor_id());
 
         __get_cpu_var(vector_irq)[vector] = -1;
-        if ( cfg->used_vectors )
+        cfg->move_cleanup_count--;
+
+        if ( cfg->move_cleanup_count == 0 
+             &&  cfg->used_vectors )
         {
             ASSERT(test_bit(vector, cfg->used_vectors));
             clear_bit(vector, cfg->used_vectors);
         }
-        cfg->move_cleanup_count--;
 unlock:
         spin_unlock(&desc->lock);
     }
diff -r d1cd78a73a79 -r 3a05da2dc7c0 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c        Mon Aug 22 16:15:19 2011 +0100
+++ b/xen/arch/x86/irq.c        Mon Aug 22 16:15:33 2011 +0100
@@ -113,7 +113,10 @@
     cfg->vector = vector;
     cfg->cpu_mask = online_mask;
     if ( cfg->used_vectors )
+    {
+        ASSERT(!test_bit(vector, cfg->used_vectors));
         set_bit(vector, cfg->used_vectors);
+    }
     irq_status[irq] = IRQ_USED;
     if (IO_APIC_IRQ(irq))
         irq_vector[irq] = vector;
@@ -207,15 +210,13 @@
     for_each_cpu_mask(cpu, tmp_mask)
         per_cpu(vector_irq, cpu)[vector] = -1;
 
-    if ( cfg->used_vectors )
-        clear_bit(vector, cfg->used_vectors);
-
     cfg->vector = IRQ_VECTOR_UNASSIGNED;
     cpus_clear(cfg->cpu_mask);
     init_one_irq_status(irq);
 
     if (likely(!cfg->move_in_progress))
         return;
+
     cpus_and(tmp_mask, cfg->old_cpu_mask, cpu_online_map);
     for_each_cpu_mask(cpu, tmp_mask) {
         for (vector = FIRST_DYNAMIC_VECTOR; vector <= LAST_DYNAMIC_VECTOR;
@@ -229,6 +230,12 @@
         }
      }
 
+    if ( cfg->used_vectors )
+    {
+        ASSERT(test_bit(vector, cfg->used_vectors));
+        clear_bit(vector, cfg->used_vectors);
+    }
+
     cfg->move_in_progress = 0;
 }
 
