
RE: [Xen-devel] x86 HPET MSI IRQs vs resume from S3



Jan Beulich wrote on 2011-01-28:
> Going through hpet_broadcast_init(), I see that hpet_setup_msi_irq()
> gets called during resume, thus causing setup_irq() to be called. I'm
> failing to spot the corresponding release_irq(), and hence I can't see
> how this whole code path is supposed to work during resume (other than
> always falling back to using legacy_hpet_event). Instead I'm wondering
> whether in the resume case only msi_compose_msg()/
> hpet_msi_write() should be called for each IRQ used rather than the
> whole hpet_broadcast_init().
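
For reference, the resume-only alternative you describe would boil down to
re-composing and re-writing the MSI message for each in-use channel. A minimal,
self-contained sketch of that pattern (stubbed types, and printf in place of the
real msi_compose_msg()/hpet_msi_write(); not actual Xen code):

#include <stdio.h>

struct msi_msg { unsigned int address; unsigned int data; };

/* Stand-in for msi_compose_msg(): rebuild the destination/vector fields. */
static void compose_msg(unsigned int irq, struct msi_msg *msg)
{
    msg->address = 0xfee00000u;
    msg->data = 0x20 + irq;
}

/* Stand-in for hpet_msi_write(): would program the channel's FSB route register. */
static void write_msg(unsigned int irq, const struct msi_msg *msg)
{
    printf("IRQ %u: address=%#x data=%#x\n", irq, msg->address, msg->data);
}

/* Resume path: only re-program the message for channels already set up. */
static void hpet_msi_resume(const unsigned int *irqs, unsigned int n)
{
    struct msi_msg msg;
    unsigned int i;

    for ( i = 0; i < n; i++ )
    {
        compose_msg(irqs[i], &msg);
        write_msg(irqs[i], &msg);
    }
}

int main(void)
{
    unsigned int irqs[] = { 24, 25 };
    hpet_msi_resume(irqs, 2);
    return 0;
}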

I think the patch below resolves this issue. I didn't create a separate path for
HPET broadcast init on resume, because the existing path already contains many
condition checks; instead, hpet_setup_msi_irq() now installs the handler only on
its first invocation and otherwise just re-composes and re-writes the MSI message.

diff -r 67f2fed57034 xen/arch/x86/hpet.c
--- a/xen/arch/x86/hpet.c       Fri Feb 11 18:22:37 2011 +0000
+++ b/xen/arch/x86/hpet.c       Tue Feb 15 14:48:54 2011 +0800
@@ -367,12 +367,20 @@ static int hpet_setup_msi_irq(unsigned i
     int ret;
     struct msi_msg msg;
     struct hpet_event_channel *ch = &hpet_events[irq_to_channel(irq)];
-
-    irq_desc[irq].handler = &hpet_msi_type;
-    ret = request_irq(irq, hpet_interrupt_handler,
-                      0, "HPET", ch);
-    if ( ret < 0 )
-        return ret;
+    irq_desc_t *desc = irq_to_desc(irq);
+
+    if ( desc->handler == &no_irq_type )
+    {
+        desc->handler = &hpet_msi_type;
+        ret = request_irq(irq, hpet_interrupt_handler,
+                          0, "HPET", ch);
+        if ( ret < 0 )
+            return ret;
+    }
+    else if ( desc->handler != &hpet_msi_type )
+    {
+        return -EINVAL;
+    }
 
     msi_compose_msg(NULL, irq, &msg);
     hpet_msi_write(irq, &msg);
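
To illustrate the effect of the guard, here is a self-contained toy (again not
actual Xen code; irq_desc/no_irq_type/hpet_msi_type are stubbed) showing why the
function is now safe to reach from hpet_broadcast_init() on both boot and resume:
the handler is requested only on the first call, a foreign handler is rejected
with -EINVAL, and every call re-programs the MSI message.

#include <stdio.h>
#include <errno.h>

struct irq_type { const char *name; };

static const struct irq_type hpet_msi_stub = { "HPET-MSI" };

/* handler == NULL stands in for &no_irq_type (descriptor not yet claimed). */
struct irq_desc_stub { const struct irq_type *handler; };
static struct irq_desc_stub descs[32];

static int setup_msi_irq(unsigned int irq)
{
    struct irq_desc_stub *desc = &descs[irq];

    if ( desc->handler == NULL )
    {
        /* Boot-time path: claim the IRQ and install the handler once. */
        desc->handler = &hpet_msi_stub;
        printf("IRQ %u: handler installed\n", irq);
    }
    else if ( desc->handler != &hpet_msi_stub )
        return -EINVAL;          /* IRQ already owned by someone else */

    /* Reached on both boot and resume: (re-)program the MSI message. */
    printf("IRQ %u: MSI message written\n", irq);
    return 0;
}

int main(void)
{
    setup_msi_irq(3);            /* boot: installs handler, writes message */
    setup_msi_irq(3);            /* resume: skips installation, rewrites message */
    return 0;
}

The advantage of doing it this way is that hpet_broadcast_init() and all its
existing condition checks are reused unchanged on resume; only
hpet_setup_msi_irq() needs to know whether it is being called for the first time.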

Jimmy


