
Re: [Xen-devel] [PATCH 1/4] xen/arm: gic: Ensure we have an ISB between ack and do_IRQ()



On 30/11/2018 19:52, Andrii Anisov wrote:
> Hello Andre,
> 
> Please see my comments below:
> 
> On 23.11.18 14:18, Andre Przywara wrote:
>> Fundamentally there is a semantic difference between edge and level
>> triggered IRQs: When the guest has handled an *edge* IRQ (EOIed so
>> the LR's state goes to 0), this is done and dusted, and Xen doesn't
>> need to care about this anymore until the next IRQ occurs.
>> For level triggered IRQs, even though the guest has handled it, we need to
>> resample the (potentially virtual) IRQ line, as it may come up or
>> down at the *device*'s discretion: the interrupt reason might have
>> gone away (GPIO condition no longer true), even before we were able
>> to inject it, or there might be another interrupt reason not yet
>> handled (incoming serial character while serving a transmit
>> interrupt). Also typically it's up to the interrupt handler to
>> confirm handling the interrupt, either explicitly by clearing an
>> interrupt bit in some status register or implicitly, for instance by
>> draining a FIFO, say on a serial device. So even though from the
>> (V)GIC's point of view the interrupt has been processed (EOIed), it
>> might still be pending.
> So, as I understand it, the intended behavior of a vGIC for level
> interrupts covers the following cases:
> 1. in case the interrupt line is still active on the HW side, but the
>    interrupt handler in the VM EOIs the interrupt, it should be
>    signaled to the vCPU by the vGIC again

yes

> 2. in case the peripheral deactivated the interrupt line, but the VM
>    has not activated it yet, this interrupt should be removed from the
>    pending state for the VM

yes
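
Both cases are really the same rule seen from two sides: the vGIC has to
look at the current line level again whenever the guest EOIs a level IRQ
and whenever we fold the LR state back. Very roughly, and simplified from
memory rather than taken from the actual new-VGIC code (the struct fields
and the function are approximations):

#include <stdbool.h>

/* Simplified sketch, not the actual Xen code. */
struct virq {
    bool level_triggered;
    bool line_level;     /* last sampled state of the (virtual) line    */
    bool pending_latch;  /* latched edge, cleared once it got delivered */
};

/* Re-evaluated whenever the device changes the line and when the guest
 * EOIs the interrupt. */
static bool virq_is_pending(const struct virq *irq)
{
    if (irq->level_triggered)
        return irq->line_level;   /* case 1: still high -> signal again
                                   * case 2: went low   -> no longer pending */
    return irq->pending_latch;    /* edge: done once it has been delivered */
}

The LRs are then merely a cache of this state for the interrupts we
currently want the guest to see.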

> IMO, case 1 is indirectly supported by the old vgic. For HW interrupts
> it's pretty natural: deactivation by the VM in the VGIC leads to
> deactivation in the GIC, so the interrupt priority is restored and the
> GIC will trap the PCPU to reinsert it. The VM will see this as an
> immediate IRQ trap right after the EOI.

Yes, this is true for hardware interrupts, and this lets the old VGIC
get away with it.
Virtual devices with level interrupt semantics are a different story;
the SBSA UART has some hacks in it to support this properly.
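
For a purely virtual device the "line" only exists in the emulation, so
the device model has to recompute it whenever its state changes and tell
the vGIC about it. A sketch of the idea only, with made-up names (this is
not the actual Xen SBSA UART code, and vgic_set_line_level() is a
hypothetical helper):

#include <stdbool.h>
#include <stdint.h>

struct domain;

/* Hypothetical helper: hand the current line level to the vGIC. */
void vgic_set_line_level(struct domain *d, unsigned int virq, bool level);

/* Made-up virtual UART state, for illustration only. */
struct vuart {
    struct domain *d;
    unsigned int virq;
    uint32_t int_status;   /* raw interrupt status bits (RX, TX, ...) */
    uint32_t int_mask;     /* which interrupt reasons are enabled     */
};

/* Called whenever the emulated UART state changes: register write,
 * character received, FIFO drained, ... */
static void vuart_update_irq(struct vuart *u)
{
    /* The virtual line is high as long as any enabled reason is set. */
    bool level = (u->int_status & u->int_mask) != 0;

    /* The vGIC tracks this level, so it can re-signal the IRQ after an
     * EOI while the line is still high, or drop a pending state that is
     * no longer backed by the device. */
    vgic_set_line_level(u->d, u->virq, level);
}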

> Also, case 2 is not implemented in the old vgic. It is somewhat
> supported by the new vgic, though that also relies on the trap to the
> hypervisor to commit the update to the LRs.

Yes, and there is not much we can do about it. But that's not a real
problem, as you have the same problem on bare metal, too.

> But it's rather a problem of the GIC architecture/implementation,
> which neither signals the CPU nor updates the associated LR when a
> level interrupt is deasserted.
> 
>> My intimate "old Xen VGIC" knowledge has meanwhile been swapped out of
>> my brain, but IIRC Xen treats every IRQ as if it were an edge IRQ.
>> This works if the guest's interrupt handler behaves correctly: most
>> IRQ handlers have a loop to iterate over all possible interrupt
>> reasons and process them, so the line indeed goes down before they
>> EOI the IRQ.
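
To make this concrete: such a handler typically looks like the sketch
below. The register layout and the helpers are made up, this is not taken
from any particular driver.

#include <stdint.h>

/* Made-up register offsets and bits, purely for illustration. */
#define UART_INT_STATUS   0x04
#define UART_INT_CLEAR    0x08
#define UART_INT_RX       (1u << 0)
#define UART_INT_TX       (1u << 1)
#define UART_INT_ERR      (1u << 2)

uint32_t uart_read(unsigned int reg);             /* hypothetical MMIO read  */
void uart_write(unsigned int reg, uint32_t val);  /* hypothetical MMIO write */
void uart_drain_rx_fifo(void);   /* reading the FIFO clears the RX reason    */
void uart_refill_tx_fifo(void);

/* Keep servicing interrupt reasons until the status register reads clear,
 * so the line is normally already down before the handler EOIs. */
void uart_irq_handler(void)
{
    uint32_t status;

    while ((status = uart_read(UART_INT_STATUS)) != 0) {
        if (status & UART_INT_RX)
            uart_drain_rx_fifo();
        if (status & UART_INT_TX)
            uart_refill_tx_fifo();
        if (status & UART_INT_ERR)
            uart_write(UART_INT_CLEAR, UART_INT_ERR); /* explicit ack */
    }
    /* Only now comes the EOI. If another interrupt reason appeared in the
     * meantime, the line is still high and the (v)GIC has to signal the
     * IRQ again -- that is the resampling described above. */
}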
> 
> I've spent some time looking through the new vgic implementation and I
> have a note about it:
> It's not clear why you are probing level interrupts on the guest->hyp
> transition. While that targets case 2 described above, it seems more
> relevant to probe the level interrupts right before the hyp->guest
> transition, because the vcpu might be descheduled, and while it sits on
> the scheduler queues the interrupt line level has more chances to be
> changed by the peripheral itself.
> Also, I'm pretty scared of the new vgic locking scheme with per-IRQ
> locks and of the locking logic, e.g. in the vgic_queue_irq_unlock()
> function.

Well, you should be scared of the old VGIC locking scheme instead ;-)
Apart from the vgic_queue_irq_unlock() function, the rest of the new
locking scheme is much clearer. And it scales much better, as we have
per-IRQ locks, which should virtually never be contended, and per-CPU
locks, which are only very rarely contended, as each is mostly taken
only by its own VCPU during entry and exit. There is no per-domain lock
for the emulation anymore.
I see that it's tempting to have an "easy" locking scheme, but that
typically means either one-lock-to-rule-them-all, which doesn't scale,
or something that doesn't cover the corner cases.
You should always prefer correctness over performance, otherwise you
just fail faster ;-)
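
For what it's worth, the part of vgic_queue_irq_unlock() that looks scary
is mostly the classic drop-and-retake dance to respect the lock ordering
(per-vCPU ap_list lock before per-IRQ lock). From memory, and simplified
rather than the exact Xen code (the types and helpers are stand-ins):

#include <stdbool.h>

/* Minimal stand-in types so the sketch is self-contained; the real code
 * uses Xen's spinlocks and list handling. */
typedef struct { int dummy; } spinlock_t;
void spin_lock(spinlock_t *l);
void spin_unlock(spinlock_t *l);

struct vcpu;

struct virq {
    spinlock_t lock;
    struct vcpu *vcpu;        /* non-NULL once queued on a vCPU's ap_list */
    bool enabled;
    bool pending;
};

struct vcpu {
    spinlock_t ap_list_lock;  /* protects the list of IRQs to be injected */
};

void ap_list_add(struct vcpu *v, struct virq *irq);  /* hypothetical helper */

/* Entered with irq->lock held; the lock order is ap_list_lock first, so
 * we have to drop the IRQ lock, take both locks in order and re-check. */
void queue_irq_unlock(struct vcpu *vcpu, struct virq *irq)
{
retry:
    if (irq->vcpu || !(irq->pending && irq->enabled)) {
        /* Already queued, or nothing to do any more. */
        spin_unlock(&irq->lock);
        return;
    }

    spin_unlock(&irq->lock);
    spin_lock(&vcpu->ap_list_lock);
    spin_lock(&irq->lock);

    if (irq->vcpu || !(irq->pending && irq->enabled)) {
        /* The world changed while the IRQ lock was dropped: start over. */
        spin_unlock(&irq->lock);
        spin_unlock(&vcpu->ap_list_lock);
        spin_lock(&irq->lock);
        goto retry;
    }

    ap_list_add(vcpu, irq);
    irq->vcpu = vcpu;

    spin_unlock(&irq->lock);
    spin_unlock(&vcpu->ap_list_lock);
}

It is not pretty, but the alternative would be a coarser lock covering
both, which is exactly what we wanted to get away from.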

> Also, sorting the list in vgic_flush_lr_state() with vgic_irq_cmp()
> looks very expensive.

Yes, but in practice this virtually never happens, as you rarely have
more than four pending IRQs at the same time.
I had patches lying around to improve this part, but never got around
to posting them, especially with only a little rationale to back them up.
If you are interested, I can dig them out, though I am not sure how
relevant this is.
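
To spell out why it rarely bites: if I remember the code right, the sort
only runs when there are more pending IRQs than LRs, i.e. only when the
order actually decides who gets an LR. Roughly (a sketch with hypothetical
helper names, not a patch):

struct vcpu;

unsigned int count_pending_irqs(struct vcpu *v);   /* hypothetical helper */
unsigned int gic_nr_lrs(void);                     /* hypothetical helper */
void sort_ap_list_by_priority(struct vcpu *v);     /* the expensive part  */

/* Only pay for the O(n log n) sort when the pending IRQs don't all fit
 * into the available LRs anyway. */
static void maybe_sort_ap_list(struct vcpu *v)
{
    if (count_pending_irqs(v) > gic_nr_lrs())
        sort_ap_list_by_priority(v);
}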

> But for sure, the performance of all that stuff should be properly
> evaluated and measured.

Indeed. Can you hack something into Xen to get some statistics on those
cases? I am not sure if Xen has something to trace lock contention, but
you could easily add some counters to track how many LRs we actually
use, to see if this is actually a problem in your case.
I think PERFCOUNTER is your friend.
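
If I remember right, it is just a matter of adding one counter definition
and bumping it in the interesting code path; roughly like this (the
counter name is made up, and please check the perfc headers for the exact
spelling, this is from memory):

/* In the perf counter definitions (perfc_defn.h): */
PERFCOUNTER(vgic_lrs_filled, "vGIC: LRs filled on guest entry")

/* ... and in the code path that populates the LRs: */
perfc_incr(vgic_lrs_filled);

IIRC the counters then show up via the 'p' debug key, and can also be
read with the xenperf tool from dom0.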

Cheers,
Andre.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

