
Re: [Xen-devel] [PATCH 2/5] xen/arm: Keep count of inflight interrupts



On 06/25/2013 05:58 PM, Stefano Stabellini wrote:

> On Tue, 25 Jun 2013, Ian Campbell wrote:
>> On Tue, 2013-06-25 at 00:04 +0100, Julien Grall wrote:
>>> From: Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
>>>
>>> For a guest's timers (both virtual and physical), Xen injects a virtual
>>> interrupt. Linux handles a timer interrupt as follows:
>>
>> We should be wary of developing things based on the way which Linux
>> happens to do things. On x86 we have several time modes, which can be
>> selected based upon guest behaviour (described in
>> docs/man/xl.cfg.pod.5). Do we need to do something similar here?
>>
>>>   1) Receive the interrupt and ack it
>>>   2) Handle the current event timer
>>>   3) Set the next event timer
>>>   4) EOI the interrupt
>>>
>>> It's unlikely possible to reinject another interrupt before
>>
>> I can't parse this sentence. "unlikely to be possible" perhaps? but I'm
>> not sure if that is what you meant.
>>
>>> the previous one is EOIed, because the next deadline is shorter than the
>>> time it takes to execute the code up to the EOI.
>>
>> If we get into this situation once is there any danger that we will get
>> into it repeatedly and overflow the count?
>>
>> Overall I'm not convinced this is the right approach to get the
>> behaviour we want. Isn't this interrupt level triggered, with the level
>> being determined by a comparison of two registers? IOW can't we
>> determine whether to retrigger the interrupt or not by examining the
>> state of our emulated versions of those registers? A generic mechanism
>> to callback into the appropriate emulator on EOI plus a little bit of
>> logic in the vtimer code is all it ought to take.
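
(For what it's worth, below is a minimal sketch of what I understand Ian
to mean. The eoi_callback field, vtimer_now() and vgic_reinject() are
invented here purely for illustration, they don't exist in the tree:)

/* Sketch: let the emulator that owns a virq register a hook which runs
 * from the maintenance interrupt when the guest EOIs that virq. */
struct pending_irq {
    /* ... existing fields (irq, desc, priority, lists, ...) ... */
    void (*eoi_callback)(struct vcpu *v, struct pending_irq *p);
};

/* vtimer hook: re-raise the virq if the emulated comparator still
 * drives the level high (enabled, not masked, deadline passed). */
static void vtimer_eoi_callback(struct vcpu *v, struct pending_irq *p)
{
    struct vtimer *t = &v->arch.virt_timer;

    if ( (t->ctl & CNTx_CTL_ENABLE) && !(t->ctl & CNTx_CTL_MASK) &&
         t->cval <= vtimer_now(v) )   /* vtimer_now(): read the vcounter */
        vgic_reinject(v, p->irq);     /* hypothetical re-injection helper */
}

/* and in maintenance_interrupt(), once the LR has been cleared: */
if ( p->eoi_callback != NULL )
    p->eoi_callback(v, p);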
> 
> AFAICT this is what could happen:
> 
> - vtimer fires
> - xen masks the vtimer
> - xen adds the vtimer to the LR for the guest
> - xen EOIs the vtimer
> - the guest receives the vtimer interrupt
> - the guest sets the next deadline in the past
> - the guest enables the vtimer
> ## an unexpected vtimer interrupt is received by Xen but the current
> ## one is still being serviced 
> - the guest EOIs the vtimer
> 
> as a result the guest loses an interrupt.
> Julien, is that correct?

Yes.
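
In guest terms the sequence is roughly the following (the helper names
are illustrative only, this is not real Linux code):

/* guest timer interrupt handler, schematically: */
static void guest_timer_handler(void)
{
    ack_timer();              /* 1) ack; the vtimer line is masked      */
    run_expired_events();     /* 2) handle the current event            */
    set_next_event(deadline); /* 3) deadline may already be in the past */
    unmask_timer();           /*    -> the line goes high again here    */
    eoi_timer();              /* 4) EOI while a second interrupt is
                               *    already pending in Xen              */
}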

> If that is the case, this can only happen once, right?  In that case
> rather than an atomic_t we could just have a bit in the status field I
> proposed before. It should be enough for us to keep track of the case
> when the irq is supposed to stay high even after the guest EOIs it. (Of
> course that means that we need to re-inject it into the guest).


For the timer, yes. But I wonder: what happens if Xen receives the same
SGI (for instance the SCHEDULE IPI) twice? Must Xen re-inject it into
the guest multiple times?
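
With a single bit, a second arrival while the first is still in an LR
would collapse into one re-injection; the counter in this patch keeps
each arrival distinct. Roughly (GIC_IRQ_GUEST_PENDING is a made-up name
for the proposed status bit, the rest matches the patch below):

/* bit: every arrival sets the same bit -> at most one re-inject */
p->status |= GIC_IRQ_GUEST_PENDING;

/* counter: every arrival is counted on injection... */
atomic_inc(&p->inflight_cnt);

/* ...and the maintenance interrupt keeps re-injecting until it drains */
if ( !atomic_dec_and_test(&p->inflight_cnt) )
    gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);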

>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
>>> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
>>> ---
>>>  xen/arch/arm/gic.c           |   35 +++++++++++++++++++++++------------
>>>  xen/arch/arm/vgic.c          |    1 +
>>>  xen/include/asm-arm/domain.h |    2 ++
>>>  3 files changed, 26 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>>> index 0fee3f2..21575df 100644
>>> --- a/xen/arch/arm/gic.c
>>> +++ b/xen/arch/arm/gic.c
>>> @@ -817,7 +817,7 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>>>  
>>>      while ((i = find_next_bit((const long unsigned int *) &eisr,
>>>                                64, i)) < 64) {
>>> -        struct pending_irq *p;
>>> +        struct pending_irq *p, *n;
>>>          int cpu, eoi;
>>>  
>>>          cpu = -1;
>>> @@ -826,13 +826,32 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>>>          spin_lock_irq(&gic.lock);
>>>          lr = GICH[GICH_LR + i];
>>>          virq = lr & GICH_LR_VIRTUAL_MASK;
>>> +
>>> +        p = irq_to_pending(v, virq);
>>> +        if ( p->desc != NULL ) {
>>> +            p->desc->status &= ~IRQ_INPROGRESS;
> 
> Now that I am thinking about this, shouldn't this be protected by taking
> the desc->lock?

Right. I don't think desc->lock alone is enough if we want to protect
against release_irq. If that function is called, it will spin until
IRQ_INPROGRESS is cleared. How about moving the ~IRQ_INPROGRESS clear
to the end of the block and adding a dmb() before it?
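
Something like this (an untested sketch of the reordering I mean, using
only what is already in the hunk below plus dmb()):

if ( p->desc != NULL ) {
    /* Assume only one pcpu needs to EOI the irq */
    cpu = p->desc->arch.eoi_cpu;
    eoi = 1;
    pirq = p->desc->irq;

    dmb();  /* complete the reads above before release_irq can run */
    /* clear IRQ_INPROGRESS last: release_irq spins on this flag, so it
     * cannot proceed until we are done touching desc */
    p->desc->status &= ~IRQ_INPROGRESS;
}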

>>> +            /* Assume only one pcpu needs to EOI the irq */
>>> +            cpu = p->desc->arch.eoi_cpu;
>>> +            eoi = 1;
>>> +            pirq = p->desc->irq;
>>> +        }
>>> +        if ( !atomic_dec_and_test(&p->inflight_cnt) )
>>> +        {
>>> +            /* A physical IRQ can't be reinjected */
>>> +            WARN_ON(p->desc != NULL);
>>> +            gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>>> +            spin_unlock_irq(&gic.lock);
>>> +            i++;
>>> +            continue;
>>> +        }
>>> +
>>>          GICH[GICH_LR + i] = 0;
>>>          clear_bit(i, &this_cpu(lr_mask));
>>>  
>>>          if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>>> -            p = list_entry(v->arch.vgic.lr_pending.next, typeof(*p), lr_queue);
>>> -            gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>>> -            list_del_init(&p->lr_queue);
>>> +            n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
>>> +            gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>>> +            list_del_init(&n->lr_queue);
>>>              set_bit(i, &this_cpu(lr_mask));
>>>          } else {
>>>              gic_inject_irq_stop();
>>> @@ -840,14 +859,6 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>>>          spin_unlock_irq(&gic.lock);
>>>  
>>>          spin_lock_irq(&v->arch.vgic.lock);
>>> -        p = irq_to_pending(v, virq);
>>> -        if ( p->desc != NULL ) {
>>> -            p->desc->status &= ~IRQ_INPROGRESS;
>>> -            /* Assume only one pcpu needs to EOI the irq */
>>> -            cpu = p->desc->arch.eoi_cpu;
>>> -            eoi = 1;
>>> -            pirq = p->desc->irq;
>>> -        }
>>>          list_del_init(&p->inflight);
>>>          spin_unlock_irq(&v->arch.vgic.lock);
>>>  
>  


