
Re: [Xen-devel] [PATCH v3 06/39] ARM: evtchn: Handle level triggered IRQs correctly



Hi Stefano,

On 28/03/18 01:01, Stefano Stabellini wrote:
On Wed, 21 Mar 2018, Andre Przywara wrote:
The event channel IRQ has level triggered semantics, however the current
VGIC treats everything as edge triggered.
To correctly process those IRQs, we have to lower the (virtual) IRQ line
at some point in time, depending on whether the interrupt condition
still prevails.
Check the per-VCPU evtchn_upcall_pending variable to make the interrupt
line match its status, and call this function upon every hypervisor
entry.

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxxxxx>
Reviewed-by: Julien Grall <julien.grall@xxxxxxx>
---
  xen/arch/arm/domain.c       | 7 +++++++
  xen/arch/arm/traps.c        | 1 +
  xen/include/asm-arm/event.h | 1 +
  3 files changed, 9 insertions(+)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ff97f2bc76..9688e62f78 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -953,6 +953,13 @@ void vcpu_mark_events_pending(struct vcpu *v)
      vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
  }
+void vcpu_update_evtchn_irq(struct vcpu *v)
+{
+    bool pending = vcpu_info(v, evtchn_upcall_pending);
+
+    vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, pending);
+}
+
  /* The ARM spec declares that even if local irqs are masked in
   * the CPSR register, an irq should wake up a cpu from WFI anyway.
   * For this reason we need to check for irqs that need delivery,
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 2638446693..5c18e918b0 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2033,6 +2033,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
           * trap and how it can be optimised.
           */
          vtimer_update_irqs(current);
+        vcpu_update_evtchn_irq(current);
  #endif

I am replying to this patch, even though I have already committed it, to
point out a problem with the way we currently handle the evtchn_irq in
this series.

The short version is that I think we should configure the PPI
corresponding to the evtchn_irq as EDGE instead of LEVEL.

The long explanation follows, please correct me if I am wrong.

1) vcpuA/cpuA is running, it has already handled the event, cleared
evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
Xen yet. It is still in guest mode.

2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
vgic_inject_irq. However, because irq->line_level is high, it is not
injected.

3) vcpuA has to wait until trapping into Xen, calling
vcpu_update_evtchn_irq, and going back to guest mode before receiving
the event. This is theoretically a very long time.

Well, looking at the vGIC code, the EOI bit in the LR is set for level interrupts. So as soon as vCPU A EOIs it, we would trap back into Xen and then call vcpu_update_evtchn_irq.

So I don't see how it could take a very long time.
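
For reference, the LR population in the new vGIC looks roughly like the
sketch below (simplified; the exact helper and macro names may differ
from the actual code):

    /* Sketch: populate an LR for a purely virtual (non-HW) IRQ. */
    uint32_t lr_val = irq->intid;

    if ( irq_is_pending(irq) )
        lr_val |= GICH_V2_LR_PENDING;

    /*
     * For level triggered interrupts, also request a maintenance
     * interrupt on EOI. This is what brings vCPU A back into Xen
     * right after it EOIs the event channel IRQ.
     */
    if ( irq->config == VGIC_CONFIG_LEVEL )
        lr_val |= GICH_V2_LR_EOI;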

Cheers,



Instead what should happen is:

1) vcpuA/cpuA is running, it has already handled the event, cleared
evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
Xen yet. It is still in guest mode.

2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
vgic_inject_irq, which calls vgic_queue_irq_unlock, which in turn
calls vcpu_kick(vcpuA), forcing it to take the event immediately.

Am I right? Wouldn't it be safer to continue configuring the evtchn_irq
as edge even in the new vgic?


--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

