
[Xen-devel] Re: [PATCH 1/3] xenoprof fixes: sleep with interrupts disabled


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: Markus Armbruster <armbru@xxxxxxxxxx>
  • Date: Mon, 15 May 2006 19:33:51 +0200
  • Delivery-date: Mon, 15 May 2006 10:34:13 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Ill-advised use of on_each_cpu() can lead to sleep with interrupts
disabled.  This is best explained with a backtrace:

BUG: sleeping function called from invalid context at 
/home/armbru/linux-2.6.tip-xen-quintela.hg/mm/slab.c:2783
in_atomic():0, irqs_disabled():1
 [<c0104f36>] show_trace+0x13/0x15
 [<c0105439>] dump_stack+0x18/0x1c
 [<c01161ac>] __might_sleep+0x9f/0xa7
 [<c0158823>] __kmalloc+0x65/0x110
 [<c018e01d>] proc_create+0x8d/0xe2
 [<c018e12d>] proc_mkdir_mode+0x1a/0x4d
 [<c018e173>] proc_mkdir+0x13/0x15
 [<c013de9b>] register_handler_proc+0x90/0x9e
 [<c013d729>] setup_irq+0xdf/0xf5
 [<c013d7a9>] request_irq+0x6a/0x8a
 [<c0231817>] bind_virq_to_irqhandler+0xe3/0x101
 [<f4c83b02>] bind_virq_cpu+0x28/0x62 [oprofile]
 [<c0121453>] on_each_cpu+0x36/0x5d
 [<f4c83c3b>] xenoprof_setup+0x22/0x14a [oprofile]
 [<f4c820bf>] oprofile_setup+0x45/0x8b [oprofile]
 [<f4c82fa3>] event_buffer_open+0x3e/0x62 [oprofile]
 [<c0159eda>] __dentry_open+0xc7/0x1b0
 [<c015a034>] nameidata_to_filp+0x20/0x34
 [<c015a078>] do_filp_open+0x30/0x39
 [<c015afe6>] do_sys_open+0x40/0xb0
 [<c015b07f>] sys_open+0x13/0x15
 [<c01048cf>] syscall_call+0x7/0xb

on_each_cpu() runs its callback with interrupts disabled.  proc_create()
passes GFP_KERNEL to kmalloc(), which may sleep.

The patch converts the code from on_each_cpu() to a for_each_cpu() loop,
which also allows it to be simplified.

Signed-off-by: Markus Armbruster <armbru@xxxxxxxxxx>


diff -rup linux-2.6.16.13-xen/arch/i386/oprofile/xenoprof.c 
linux-2.6.16.13-xen.patched-1/arch/i386/oprofile/xenoprof.c
--- linux-2.6.16.13-xen/arch/i386/oprofile/xenoprof.c   2006-05-15 
10:32:22.000000000 +0200
+++ linux-2.6.16.13-xen.patched-1/arch/i386/oprofile/xenoprof.c 2006-05-15 
10:30:49.000000000 +0200
@@ -141,56 +141,40 @@ xenoprof_ovf_interrupt(int irq, void * d
 }
 
 
-static void unbind_virq_cpu(void * info)
-{
-       int cpu = smp_processor_id();
-       if (ovf_irq[cpu] >= 0) {
-               unbind_from_irqhandler(ovf_irq[cpu], NULL);
-               ovf_irq[cpu] = -1;
-       }
-}
-
-
 static void unbind_virq(void)
 {
-       on_each_cpu(unbind_virq_cpu, NULL, 0, 1);
-}
-
-
-int bind_virq_error;
-
-static void bind_virq_cpu(void * info)
-{
-       int result;
-       int cpu = smp_processor_id();
+       int i;
 
-       result = bind_virq_to_irqhandler(VIRQ_XENOPROF,
-                                        cpu,
-                                        xenoprof_ovf_interrupt,
-                                        SA_INTERRUPT,
-                                        "xenoprof",
-                                        NULL);
-
-       if (result<0) {
-               bind_virq_error = result;
-               printk("xenoprof.c: binding VIRQ_XENOPROF to IRQ failed on CPU "
-                      "%d\n", cpu);
-       } else {
-               ovf_irq[cpu] = result;
+       for_each_cpu(i) {
+               if (ovf_irq[i] >= 0) {
+                       unbind_from_irqhandler(ovf_irq[i], NULL);
+                       ovf_irq[i] = -1;
+               }
        }
 }
 
 
 static int bind_virq(void)
 {
-       bind_virq_error = 0;
-       on_each_cpu(bind_virq_cpu, NULL, 0, 1);
-       if (bind_virq_error) {
-               unbind_virq();
-               return bind_virq_error;
-       } else {
-               return 0;
+       int i, result;
+
+       for_each_cpu(i) {
+               result = bind_virq_to_irqhandler(VIRQ_XENOPROF,
+                                                i,
+                                                xenoprof_ovf_interrupt,
+                                                SA_INTERRUPT,
+                                                "xenoprof",
+                                                NULL);
+
+               if (result < 0) {
+                       unbind_virq();
+                       return result;
+               }
+
+               ovf_irq[i] = result;
        }
+               
+       return 0;
 }
 
 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

