
Re: [Xen-devel] IO-APIC: tweak debug key info formatting



>>> On 03.02.12 at 14:30, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>Furthermore, printing fewer characters makes it less likely that the
>serial buffer will overflow resulting in loss of critical debugging
>information.

For that part, shortening some of the strings would certainly be
desirable too (delivery_mode being the worst).

>--- a/xen/arch/x86/io_apic.c
>+++ b/xen/arch/x86/io_apic.c
>@@ -2406,13 +2406,13 @@ void dump_ioapic_irq_info(void)
>             *(((int *)&rte) + 1) = io_apic_read(entry->apic, 0x11 + 2 * pin);
>             spin_unlock_irqrestore(&ioapic_lock, flags);
> 
>-            printk("vector=%u, delivery_mode=%u, dest_mode=%s, "
>-                   "delivery_status=%d, polarity=%d, irr=%d, "
>-                   "trigger=%s, mask=%d, dest_id:%d\n",
>+            printk("vector=%3u delivery_mode=%u dest_mode=%s "

Could you please print the vector as %02x instead? We should really
do this consistently everywhere, and vectors in decimal are pretty
meaningless anyway (as one will always need to convert them for
purposes of priority determination or comparison with #define-s in
the sources).

Jan

>+                   "delivery_status=%d polarity=%d irr=%d "
>+                   "trigger=%s mask=%d dest_id:%d\n",
>                    rte.vector, rte.delivery_mode,
>-                   rte.dest_mode ? "logical" : "physical",
>+                   rte.dest_mode ? "logical " : "physical",
>                    rte.delivery_status, rte.polarity, rte.irr,
>-                   rte.trigger ? "level" : "edge", rte.mask,
>+                   rte.trigger ? "level" : "edge ", rte.mask,
>                    rte.dest.logical.logical_dest);
> 
>             if ( entry->next == 0 )



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

