
Re: [Xen-devel] [PATCH 5/5] xen/arm: Only enable physical IRQs when the guest asks

On Tue, 2013-06-25 at 18:38 +0100, Julien Grall wrote:
> >> @@ -719,11 +731,18 @@ int gic_route_irq_to_guest(struct domain *d, const 
> >> struct dt_irq *irq,
> >>      unsigned long flags;
> >>      int retval;
> >>      bool_t level;
> >> +    struct pending_irq *p;
> >> +    /* XXX: handler other VCPU than 0 */
> > 
> > That should be something like "XXX: handle VCPUs other than 0".
> > 
> > This only matters if we can route SGIs or PPIs to the guest though I
> > think, since they are the only banked interrupts? For SPIs we actually
> > want to actively avoid doing this multiple times, don't we?
> Yes. Here the VCPU is only used to retrieve the struct pending_irq.

Which is per-CPU for PPIs and SGIs. Do we not care about PPIs here?

> > 
> > For the banked interrupts I think we just need a loop here, or for
> > p->desc to not be part of the pending_irq struct but actually part of
> > some separate per-domain datastructure, since it would be very weird to
> > have a domain where the PPIs differed between CPUs. (I'm not sure if
> > that is allowed by the hardware, I bet it is, but it would be a
> > pathological case IMHO...).
> > I think a perdomain irq_desc * array is probably the right answer,
> > unless someone can convincingly argue that PPI routing differing between
> > VCPUs in a guest is a useful thing...
> Until now, I haven't seen PPIs on any devices other than the arch timers
> and the GIC. I don't know if it's possible, but pending_irq structures
> are also banked for PPIs, so that's not an issue.
> The issue is how we link the physical PPI to the virtual PPI. Is it a
> 1:1 mapping? And how does Xen handle a PPI when it arrives on a VCPU
> which doesn't handle it (for instance, in a domU)?

How do you mean?
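To illustrate Ian's suggestion above, here is a minimal sketch (not the actual Xen code; the type and field names `arch_domain_irqs`, `ppi_desc` and the helper `ppi_to_desc` are hypothetical) of keeping the physical irq_desc for PPIs in a per-domain array rather than in the per-VCPU struct pending_irq, so the PPI routing cannot differ between VCPUs of a domain:

```c
#include <stddef.h>

/* On the GIC, PPIs occupy interrupt IDs 16-31 (SGIs are 0-15,
 * SPIs start at 32). */
#define PPI_BASE 16
#define NR_PPIS  16

struct irq_desc;   /* opaque here; defined elsewhere in Xen */

/* Hypothetical per-domain state: one shared physical descriptor
 * per PPI for the whole domain, instead of a desc pointer inside
 * each VCPU's banked struct pending_irq. */
struct arch_domain_irqs {
    struct irq_desc *ppi_desc[NR_PPIS];
};

/* Look up the physical descriptor for a routed PPI; returns NULL
 * for interrupt IDs outside the PPI range or for unrouted PPIs. */
static struct irq_desc *ppi_to_desc(struct arch_domain_irqs *ai,
                                    unsigned int irq)
{
    if ( irq < PPI_BASE || irq >= PPI_BASE + NR_PPIS )
        return NULL;
    return ai->ppi_desc[irq - PPI_BASE];
}
```

With this layout, gic_route_irq_to_guest would record the desc once per domain for a PPI, sidestepping both the "which VCPU's pending_irq do we use" question and the pathological case of per-VCPU PPI routing.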

