
[Xen-devel] [PATCH v4] x86: Fix possible ASSERT(cpu < nr_cpu_ids)



From: David Wang <davidwang@xxxxxxxxxxx>

CPUs may share an in-use channel. Hence clearing of a bit from
the cpumask (in hpet_broadcast_exit()) as well as setting one
(in hpet_broadcast_enter()) must not race evaluation of that same
cpumask. Therefore avoid evaluating the cpumask twice in
hpet_detach_channel(). Otherwise cpumask_empty() may e.g. return
false while the subsequent cpumask_first() could return nr_cpu_ids,
which then triggers the assertion in cpumask_of() reached through
set_channel_irq_affinity().
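
To illustrate the window, here is a minimal user-space sketch (not Xen
code: mask_first(), NR_CPU_IDS, and the explicit in-line "interleaving"
below are simplified stand-ins for Xen's cpumask_first()/cpumask_empty(),
nr_cpu_ids, and a concurrently running hpet_broadcast_exit()):

#include <stdbool.h>
#include <stdio.h>

#define NR_CPU_IDS 8            /* stand-in for nr_cpu_ids */

static unsigned long mask;      /* stand-in for ch->cpumask */

/* Returns the first set bit, or NR_CPU_IDS if the mask is empty. */
static unsigned int mask_first(unsigned long m)
{
    for ( unsigned int i = 0; i < NR_CPU_IDS; i++ )
        if ( m & (1UL << i) )
            return i;
    return NR_CPU_IDS;
}

int main(void)
{
    mask = 1UL << 3;            /* one CPU still attached to the channel */

    /* Racy pattern: two separate reads of the shared mask. */
    bool empty = (mask_first(mask) == NR_CPU_IDS); /* cpumask_empty() */
    mask &= ~(1UL << 3);        /* a concurrent exit clears the last bit */
    unsigned int first = mask_first(mask);         /* cpumask_first()  */

    if ( !empty && first >= NR_CPU_IDS )
        printf("race: mask looked non-empty, yet first bit is %u"
               " (== nr_cpu_ids)\n", first);

    /* Fixed pattern: read once, derive both answers from one snapshot. */
    mask = 1UL << 3;
    unsigned int next = mask_first(mask);
    if ( next >= NR_CPU_IDS )
        printf("channel now unused\n");
    else
        printf("hand channel to CPU %u\n", next);

    return 0;
}

With a single cpumask_first() read, as in the patch below, the emptiness
test and the successor CPU are derived from the same snapshot, so a
concurrent bit flip can no longer make the two disagree and feed
nr_cpu_ids into set_channel_irq_affinity().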

Signed-off-by: David Wang <davidwang@xxxxxxxxxxx>
---
 xen/arch/x86/hpet.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index bc7a851..18447db 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -509,6 +509,8 @@ static void hpet_attach_channel(unsigned int cpu,
 static void hpet_detach_channel(unsigned int cpu,
                                 struct hpet_event_channel *ch)
 {
+    unsigned int next;
+
     spin_lock_irq(&ch->lock);
 
     ASSERT(ch == per_cpu(cpu_bc_channel, cpu));
@@ -517,17 +519,21 @@ static void hpet_detach_channel(unsigned int cpu,
 
     if ( cpu != ch->cpu )
         spin_unlock_irq(&ch->lock);
-    else if ( cpumask_empty(ch->cpumask) )
-    {
-        ch->cpu = -1;
-        clear_bit(HPET_EVT_USED_BIT, &ch->flags);
-        spin_unlock_irq(&ch->lock);
-    }
     else
     {
-        ch->cpu = cpumask_first(ch->cpumask);
-        set_channel_irq_affinity(ch);
-        local_irq_enable();
+        next = cpumask_first(ch->cpumask);
+        if ( next >= nr_cpu_ids )
+        {
+            ch->cpu = -1;
+            clear_bit(HPET_EVT_USED_BIT, &ch->flags);
+            spin_unlock_irq(&ch->lock);
+        }
+        else
+        {
+            ch->cpu = next;
+            set_channel_irq_affinity(ch);
+            local_irq_enable();
+        }
     }
 }
 
-- 
2.7.4

