Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP fix
Is it a level or an edge irq?

On Wed, 29 Jan 2014, Julien Grall wrote:
> Hi,
>
> It's weird, a physical IRQ should not be injected twice ...
> Were you able to print the IRQ number?
>
> In any case, you are using the old version of the interrupt patch series.
> Your new error may come from a race condition in this code.
>
> Can you try to use a newer version?
>
> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@xxxxxxxxxxxxxxx> wrote:
> > Right, that's why changing it to cpumask_of(0) shouldn't make any
> > difference for xen-unstable (it should make things clearer, if nothing
> > else) but it should fix things for Oleksandr.
>
> Unfortunately, it is not enough for stable work.
>
> I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0)
> in gic_route_irq_to_guest(). As a result, I no longer see the situation
> which led to the deadlock in the on_selected_cpus function (expected).
> But the hypervisor sometimes hangs somewhere else (I have not yet
> identified where this happens), or I sometimes see traps like the
> following (the "WARN_ON(p->desc != NULL)" in maintenance_interrupt()
> leads to them):
>
> (XEN) CPU1: Unexpected Trap: Undefined Instruction
> (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
> (XEN) CPU:    1
> (XEN) PC:     00242c1c __warn+0x20/0x28
> (XEN) CPSR:   200001da MODE:Hypervisor
> (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
> (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
> (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
> (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
> (XEN)
> (XEN)   VTCR_EL2: 80002558
> (XEN)  VTTBR_EL2: 00020000dec6a000
> (XEN)
> (XEN)  SCTLR_EL2: 30cd187f
> (XEN)    HCR_EL2: 00000000000028b5
> (XEN)  TTBR0_EL2: 00000000d2014000
> (XEN)
> (XEN)    ESR_EL2: 00000000
> (XEN)  HPFAR_EL2: 0000000000482110
> (XEN)      HDFAR: fa211190
> (XEN)      HIFAR: 00000000
> (XEN)
> (XEN) Xen stack trace from sp=4bfd7eb4:
> (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
> (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
> (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
> (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
> (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
> (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
> (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
> (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
> (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
> (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
> (XEN)    ffeffbfe fedeefff fffd5ffe
> (XEN) Xen call trace:
> (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
> (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
> (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
> (XEN)    [<00248e60>] do_IRQ+0x138/0x198
> (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
> (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
> (XEN)    [<00251830>] return_from_trap+0/0x4
>
> Also, I am posting maintenance_interrupt() from my tree:
>
> static void maintenance_interrupt(int irq, void *dev_id,
>                                   struct cpu_user_regs *regs)
> {
>     int i = 0, virq, pirq;
>     uint32_t lr;
>     struct vcpu *v = current;
>     uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
>
>     while ((i = find_next_bit((const long unsigned int *) &eisr,
>                               64, i)) < 64) {
>         struct pending_irq *p, *n;
>         int cpu, eoi;
>
>         cpu = -1;
>         eoi = 0;
>
>         spin_lock_irq(&gic.lock);
>         lr = GICH[GICH_LR + i];
>         virq = lr & GICH_LR_VIRTUAL_MASK;
>
>         p = irq_to_pending(v, virq);
>         if ( p->desc != NULL ) {
>             p->desc->status &= ~IRQ_INPROGRESS;
>             /* Assume only one pcpu needs to EOI the irq */
>             cpu = p->desc->arch.eoi_cpu;
>             eoi = 1;
>             pirq = p->desc->irq;
>         }
>         if ( !atomic_dec_and_test(&p->inflight_cnt) )
>         {
>             /* Physical IRQ can't be reinjected */
>             WARN_ON(p->desc != NULL);
>             gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>             spin_unlock_irq(&gic.lock);
>             i++;
>             continue;
>         }
>
>         GICH[GICH_LR + i] = 0;
>         clear_bit(i, &this_cpu(lr_mask));
>
>         if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>             n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
>             gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>             list_del_init(&n->lr_queue);
>             set_bit(i, &this_cpu(lr_mask));
>         } else {
>             gic_inject_irq_stop();
>         }
>         spin_unlock_irq(&gic.lock);
>
>         spin_lock_irq(&v->arch.vgic.lock);
>         list_del_init(&p->inflight);
>         spin_unlock_irq(&v->arch.vgic.lock);
>
>         if ( eoi ) {
>             /* this is not racy because we can't receive another irq of the
>              * same type until we EOI it. */
>             if ( cpu == smp_processor_id() )
>                 gic_irq_eoi((void*)(uintptr_t)pirq);
>             else
>                 on_selected_cpus(cpumask_of(cpu),
>                                  gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
>         }
>
>         i++;
>     }
> }
>
>
> Oleksandr Tyshchenko | Embedded Developer
> GlobalLogic

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
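The change Oleksandr describes above boils down to one line in gic_route_irq_to_guest(): deliver the physical IRQ to the pCPU that is setting up the route instead of always to CPU0, so that, as he reports, the EOI in maintenance_interrupt() stays local and the cross-CPU on_selected_cpus() path that deadlocked is not used. A minimal sketch of that idea is below; it assumes the gic_set_irq_properties() and dt_irq_is_level_triggered() helpers of the xen-unstable GIC code of that period and omits the irqaction/pending_irq bookkeeping of the real function, so treat the names, signatures and the 0xa0 priority as approximations rather than the actual patch:

    /* Sketch only: just the routing call of gic_route_irq_to_guest() is
     * shown; the irqaction allocation, locking and pending_irq/desc setup
     * of the real function are omitted. */
    void gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
                                const char *devname)
    {
        bool_t level = dt_irq_is_level_triggered(irq);

        /* Variant discussed for xen-unstable: always deliver the physical
         * IRQ to CPU0, i.e. cpumask_of(0) as the third argument below. */

        /* Oleksandr's experiment: deliver it to the current pCPU instead;
         * in his testing this keeps eoi_cpu equal to smp_processor_id() in
         * maintenance_interrupt(), so gic_irq_eoi() runs locally and the
         * on_selected_cpus() path is avoided. */
        gic_set_irq_properties(irq->irq, level,
                               cpumask_of(smp_processor_id()),
                               0xa0 /* guest IRQ priority */);
    }

Either way, this only moves where the physical EOI happens; why the same physical IRQ is found inflight twice, which is what trips the WARN_ON(p->desc != NULL) above, remains the open question in this thread.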