Re: [Xen-devel] [RFC][PATCH] xen/apic: refactor error_interrupt
>>> On 04.05.15 at 04:03, <tiejun.chen@xxxxxxxxx> wrote:
> --- a/xen/arch/x86/apic.c
> +++ b/xen/arch/x86/apic.c
> @@ -1319,28 +1319,37 @@ out: ;
>   * This interrupt should never happen with our APIC/SMP architecture
>   */
>
> +static const char *apic_fault_reasons[] =

If at all, then this should be const. But...

> +{
> +    "Send CS error",
> +    "Receive CS error",
> +    "Send accept error",
> +    "Receive accept error",
> +    "Reserved",
> +    "Send illegal vector",
> +    "Received illegal vector",
> +    "Illegal register address",
> +};
> +
> +static const char *apic_get_fault_reason(u8 fault_reason)
> +{
> +    return apic_fault_reasons[fault_reason];
> +}
> +
>  void error_interrupt(struct cpu_user_regs *regs)
>  {
>      unsigned long v, v1;
> +    const char *reason;
>
>      /* First tickle the hardware, only then report what went on. -- REW */
>      v = apic_read(APIC_ESR);
>      apic_write(APIC_ESR, 0);
>      v1 = apic_read(APIC_ESR);
>      ack_APIC_irq();
> +    reason = apic_get_fault_reason(ffs(v1) - 1);

... I'm not convinced - you're treating errors represented by lower bits
"better" than higher ones. And if there are multiple errors, then spelling
out one but not the other may end up being confusing. These errors should
be rare enough to warrant manually looking up the individual bits'
meanings.

Jan

> -    /* Here is what the APIC error bits mean:
> -       0: Send CS error
> -       1: Receive CS error
> -       2: Send accept error
> -       3: Receive accept error
> -       4: Reserved
> -       5: Send illegal vector
> -       6: Received illegal vector
> -       7: Illegal register address
> -    */
> -    printk (KERN_DEBUG "APIC error on CPU%d: %02lx(%02lx)\n",
> -            smp_processor_id(), v , v1);
> +    printk (KERN_DEBUG "APIC error on CPU%d: %02lx(%02lx):%s.\n",
> +            smp_processor_id(), v , v1, reason);
>  }
>
>  /*

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
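[Editorial note: to illustrate the objection above about `ffs(v1) - 1`, here is a minimal, standalone sketch in plain C (stdio only, not Xen code; the helper name print_apic_esr is hypothetical). It decodes every set bit of an ESR snapshot rather than only the lowest one, which is the information an ffs()-based lookup would drop when several error bits are set at once.]

```c
#include <stdio.h>

/* Meaning of the low 8 APIC ESR bits (same table as in the patch). */
static const char *const apic_fault_reasons[8] = {
    "Send CS error",
    "Receive CS error",
    "Send accept error",
    "Receive accept error",
    "Reserved",
    "Send illegal vector",
    "Received illegal vector",
    "Illegal register address",
};

/* Hypothetical helper: report every error present in an ESR snapshot,
 * instead of only the one selected by ffs(). */
static void print_apic_esr(unsigned long esr)
{
    unsigned int bit;

    printf("APIC ESR %02lx:", esr);
    for ( bit = 0; bit < 8; bit++ )
        if ( esr & (1UL << bit) )
            printf(" %s;", apic_fault_reasons[bit]);
    printf("\n");
}

int main(void)
{
    /* Example: "Send accept error" (bit 2) and "Received illegal
     * vector" (bit 6) latched at the same time. */
    print_apic_esr(0x44);
    return 0;
}
```

With both bits set, this prints "APIC ESR 44: Send accept error; Received illegal vector;", whereas a single ffs()-based lookup would report only "Send accept error" and hide the second condition.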