
Re: [Xen-devel] [RFC PATCH 38/49] ARM: new VGIC: handle hardware mapped IRQs



Hi,

On 02/26/2018 05:19 PM, Andre Przywara wrote:
Hi,

On 26/02/18 16:57, Julien Grall wrote:


On 02/26/2018 04:48 PM, Andre Przywara wrote:
Hi,

On 23/02/18 18:14, Julien Grall wrote:


On 23/02/18 18:02, Andre Przywara wrote:
Hi,

Hi Andre,

On 19/02/18 12:19, Julien Grall wrote:
Hi,

On 09/02/18 14:39, Andre Przywara wrote:
The VGIC supports connecting a virtual IRQ to a hardware IRQ, so that
when a guest EOIs the virtual interrupt, the state of the corresponding
physical interrupt is updated at the same time.
Implement the interface that the Xen arch/core code expects in order to
connect the virtual and the physical world.

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxxxxx>
---
 xen/arch/arm/vgic/vgic.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
index dc5e011fa3..8d5260a7db 100644
--- a/xen/arch/arm/vgic/vgic.c
+++ b/xen/arch/arm/vgic/vgic.c
@@ -693,6 +693,69 @@ void vgic_kick_vcpus(struct domain *d)
     }
 }
 
+struct irq_desc *vgic_get_hw_irq_desc(struct domain *d, struct vcpu *v,
+                                      unsigned int virq)
+{
+    struct irq_desc *desc = NULL;
+    struct vgic_irq *irq = vgic_get_irq(d, v, virq);
+    unsigned long flags;
+
+    if ( !irq )
+        return NULL;
+
+    spin_lock_irqsave(&irq->irq_lock, flags);
+    if ( irq->hw )
+        desc = irq_to_desc(irq->hwintid);

This is not going to work well for PPIs. We should consider adding at
least an ASSERT(...) in the code to prevent bad use of it.
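
For illustration, such a guard could look like the sketch below (the
exact bound is an assumption here: VGIC_NR_PRIVATE_IRQS is taken to
mark the end of the per-CPU SGI/PPI range, so only SPIs ever reach
irq_to_desc()):

    spin_lock_irqsave(&irq->irq_lock, flags);
    if ( irq->hw )
    {
        /* PPIs have one irq_desc per pCPU, so catch misuse early:
         * only shared interrupts (SPIs) may be looked up this way. */
        ASSERT(irq->hwintid >= VGIC_NR_PRIVATE_IRQS);
        desc = irq_to_desc(irq->hwintid);
    }
    spin_unlock_irqrestore(&irq->irq_lock, flags);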

Yeah, done. But I wonder whether we should eventually extend the
irq_to_desc() function to take the vCPU, since we will need that anyway
once we use hardware-mapped timer IRQs (PPIs) in the future. But this
should not be in this series, I guess.

irq_to_desc() only deals with hardware interrupts, so you mean pCPU
instead of vCPU?

Yes, indeed. But I think this points to the problem with this approach:
the virtual IRQ is tied to a vCPU, so on a CPU migration we have to
make sure that not only the affinity is updated (as we do for SPIs),
but that the interrupt itself is exchanged, since CPU0/PPI9 has a
different irq_desc* from, say, CPU1/PPI9.
So there is more to it than just adding a parameter to irq_to_desc().
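
For reference, the lookup currently resolves per-CPU IRQs against the
running pCPU via this_cpu(); a variant taking an explicit pCPU (a
hypothetical sketch, the name irq_to_desc_cpu() is made up) might look
like:

    /* Today: local IRQs (SGIs/PPIs) resolve against the current CPU. */
    struct irq_desc *irq_to_desc(int irq)
    {
        if ( irq < NR_LOCAL_IRQS )
            return &this_cpu(local_irq_desc)[irq];

        return &irq_desc[irq - NR_LOCAL_IRQS];
    }

    /* Hypothetical: let the caller name the pCPU for per-CPU IRQs. */
    struct irq_desc *irq_to_desc_cpu(unsigned int cpu, int irq)
    {
        if ( irq < NR_LOCAL_IRQS )
            return &per_cpu(local_irq_desc, cpu)[irq];

        return &irq_desc[irq - NR_LOCAL_IRQS];
    }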

A change to the irq_to_desc() interface needs to be justified. The use
case I have in mind for PPIs is the virtual timer. In that case, you
will always receive the PPI on the right pCPU.

Yes, but the connection between a virtual and a physical IRQ is realised
as a link between struct pending_irq/vgic_irq and struct irq_desc.
For an SPI this is always the same irq_desc, regardless of the affinity
or the running CPU. But for PPIs you would need to swap the actual
irq_desc pointer whenever the affinity changes. Not really rocket
science (though it may become nasty with the locking), but it needs to
be implemented.

I don't think it is a big deal. You would remove the "link" when saving the vCPU state and add the "link" back when restoring it. In both cases, you would be on the right pCPU. Have a look at:

https://lists.xenproject.org/archives/html/xen-devel/2015-11/msg00925.html
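
A minimal sketch of that idea, with all helper names and the
VTIMER_PPI constant invented for illustration (the linked thread
discusses the real routing of the virtual timer PPI):

    /* Hypothetical sketch: keep the vIRQ<->irq_desc link pointing at
     * the irq_desc of the pCPU the vCPU currently runs on. */
    void virt_timer_save(struct vcpu *v)
    {
        /* Still running on the old pCPU: drop the link to its desc. */
        vgic_disconnect_hw_irq(v->domain, v, VTIMER_PPI);
    }

    void virt_timer_restore(struct vcpu *v)
    {
        /* Now running on the new pCPU: link to its local desc. */
        vgic_connect_hw_irq(v->domain, v, VTIMER_PPI,
                            irq_to_desc_cpu(smp_processor_id(),
                                            VTIMER_PPI));
    }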

Cheers,

--
Julien Grall
