Re: [Xen-devel] [PATCH 2/5] xen/arm: Keep count of inflight interrupts
On Tue, 2013-06-25 at 00:04 +0100, Julien Grall wrote:
> From: Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
> 
> For guest's timers (both virtual and physical), Xen will inject virtual
> interrupt. Linux handles timer interrupt as:

We should be wary of developing things based on the way Linux happens
to do things. On x86 we have several time modes, which can be selected
based upon guest behaviour (described in docs/man/xl.cfg.pod.5). Do we
need to do something similar here?

>  1) Receive the interrupt and ack it
>  2) Handle the current event timer
>  3) Set the next event timer
>  4) EOI the interrupt
> 
> It's unlikely possible to reinject another interrupt before

I can't parse this sentence. "unlikely to be possible" perhaps? but I'm
not sure if that is what you meant.

> the previous one is EOIed because the next deadline is shorter than the time
> to execute code until EOI it.

If we get into this situation once, is there any danger that we will
get into it repeatedly and overflow the count?

Overall I'm not convinced this is the right approach to get the
behaviour we want.

Isn't this interrupt level triggered, with the level being determined
by a comparison of two registers? IOW can't we determine whether to
retrigger the interrupt or not by examining the state of our emulated
versions of those registers?

A generic mechanism to callback into the appropriate emulator on EOI
plus a little bit of logic in the vtimer code is all it ought to take.

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
> ---
>  xen/arch/arm/gic.c           |   35 +++++++++++++++++++++++------------
>  xen/arch/arm/vgic.c          |    1 +
>  xen/include/asm-arm/domain.h |    2 ++
>  3 files changed, 26 insertions(+), 12 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 0fee3f2..21575df 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -817,7 +817,7 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>  
>      while ((i = find_next_bit((const long unsigned int *) &eisr,
>                                64, i)) < 64) {
> -        struct pending_irq *p;
> +        struct pending_irq *p, *n;
>          int cpu, eoi;
>  
>          cpu = -1;
> @@ -826,13 +826,32 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>          spin_lock_irq(&gic.lock);
>          lr = GICH[GICH_LR + i];
>          virq = lr & GICH_LR_VIRTUAL_MASK;
> +
> +        p = irq_to_pending(v, virq);
> +        if ( p->desc != NULL ) {
> +            p->desc->status &= ~IRQ_INPROGRESS;
> +            /* Assume only one pcpu needs to EOI the irq */
> +            cpu = p->desc->arch.eoi_cpu;
> +            eoi = 1;
> +            pirq = p->desc->irq;
> +        }
> +        if ( !atomic_dec_and_test(&p->inflight_cnt) )
> +        {
> +            /* Physical IRQ can't be reinject */
> +            WARN_ON(p->desc != NULL);
> +            gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
> +            spin_unlock_irq(&gic.lock);
> +            i++;
> +            continue;
> +        }
> +
>          GICH[GICH_LR + i] = 0;
>          clear_bit(i, &this_cpu(lr_mask));
>  
>          if ( !list_empty(&v->arch.vgic.lr_pending) ) {
> -            p = list_entry(v->arch.vgic.lr_pending.next, typeof(*p), lr_queue);
> -            gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
> -            list_del_init(&p->lr_queue);
> +            n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
> +            gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
> +            list_del_init(&n->lr_queue);
>              set_bit(i, &this_cpu(lr_mask));
>          } else {
>              gic_inject_irq_stop();
> @@ -840,14 +859,6 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>          spin_unlock_irq(&gic.lock);
>  
>          spin_lock_irq(&v->arch.vgic.lock);
> -        p = irq_to_pending(v, virq);
> -        if ( p->desc != NULL ) {
> -            p->desc->status &= ~IRQ_INPROGRESS;
> -            /* Assume only one pcpu needs to EOI the irq */
> -            cpu = p->desc->arch.eoi_cpu;
> -            eoi = 1;
> -            pirq = p->desc->irq;
> -        }
>          list_del_init(&p->inflight);
>          spin_unlock_irq(&v->arch.vgic.lock);
>  
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 7eaccb7..2d91dce 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -672,6 +672,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq, int virtual)
>  
>      spin_lock_irqsave(&v->arch.vgic.lock, flags);
>  
> +    atomic_inc(&n->inflight_cnt);
>      /* vcpu offline or irq already pending */
>      if (test_bit(_VPF_down, &v->pause_flags) || !list_empty(&n->inflight))
>      {
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 339b6e6..fa0b776 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -8,6 +8,7 @@
>  #include <asm/p2m.h>
>  #include <asm/vfp.h>
>  #include <public/hvm/params.h>
> +#include <asm/atomic.h>
>  
>  /* Represents state corresponding to a block of 32 interrupts */
>  struct vgic_irq_rank {
> @@ -21,6 +22,7 @@ struct vgic_irq_rank {
>  struct pending_irq
>  {
>      int irq;
> +    atomic_t inflight_cnt;
>      struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
>      uint8_t priority;
>      /* inflight is used to append instances of pending_irq to
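
For readers skimming the thread, the scheme the patch implements boils down
to the following minimal sketch: every injection bumps a per-IRQ atomic
counter, and the EOI/maintenance path decrements it, re-arming the list
register as pending rather than retiring the interrupt while the count is
still non-zero. The type and helper names below (pending_slot, inject,
eoi_retires_irq) are illustrative only, not the actual Xen symbols.

#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative stand-in for the counting part of struct pending_irq. */
struct pending_slot {
    int irq;
    atomic_int inflight_cnt;    /* injections not yet EOIed by the guest */
};

/* Injection path: count every request, even if the vIRQ is already pending. */
void inject(struct pending_slot *p)
{
    atomic_fetch_add(&p->inflight_cnt, 1);
    /* ...then queue the vIRQ / program the list register as before... */
}

/*
 * EOI (maintenance interrupt) path: retire the interrupt only when the
 * count drops to zero; otherwise leave it pending so the guest takes it
 * again, which is how the patch avoids losing back-to-back timer ticks.
 */
bool eoi_retires_irq(struct pending_slot *p)
{
    return atomic_fetch_sub(&p->inflight_cnt, 1) == 1;
}

The overflow worry raised above translates to inflight_cnt never returning
to zero if injections consistently outrun EOIs.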
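
On the alternative suggested above, deriving the line level from the
emulated registers, a hedged sketch of an EOI-time check might look like
the following. The CNTx_CTL bit layout is the architected ARM generic timer
one; the emul_vtimer structure and the function names are assumptions made
for illustration rather than existing Xen code.

#include <stdbool.h>
#include <stdint.h>

/* Architected timer control bits (CNTV_CTL / CNTP_CTL). */
#define CNTx_CTL_ENABLE  (1u << 0)   /* timer enabled */
#define CNTx_CTL_IMASK   (1u << 1)   /* output masked */

/* Assumed mirror of the guest timer state kept by the vtimer emulation. */
struct emul_vtimer {
    uint32_t ctl;    /* guest's CNTx_CTL */
    uint64_t cval;   /* guest's CNTx_CVAL compare value */
};

/*
 * The timer output is level-triggered: it is asserted whenever the counter
 * has reached the compare value, the timer is enabled and its output is not
 * masked.  This is the two-register comparison referred to above.
 */
bool emul_vtimer_level(const struct emul_vtimer *t, uint64_t counter)
{
    return (t->ctl & CNTx_CTL_ENABLE) &&
           !(t->ctl & CNTx_CTL_IMASK) &&
           counter >= t->cval;
}

/*
 * Hypothetical per-IRQ EOI callback: instead of counting injections, the
 * maintenance interrupt could ask the emulator whether the level is still
 * high and re-inject only in that case.
 */
bool vtimer_eoi_reinject(const struct emul_vtimer *t, uint64_t counter)
{
    return emul_vtimer_level(t, counter);
}

With a hook of this shape the generic GIC code needs no in-flight count at
all: if the level is still asserted when the guest EOIs, the callback simply
injects one more instance, so the number of outstanding interrupts stays
naturally bounded.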