Re: [Xen-devel] [PATCH v4 4/4] xen/arm: physical irq follow virtual irq
On Fri, 2014-06-06 at 18:48 +0100, Stefano Stabellini wrote:
> Migrate physical irqs to the same physical cpu that is running the vcpu
> expected to receive the irqs. That is done when enabling irqs, when the
> guest writes to GICD_ITARGETSR and when Xen migrates a vcpu to a
> different pcpu.
>
> Introduce a new hook in common code to call the vgic irq migration code
> as evtchn_move_pirqs only deals with event channels at the moment.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> CC: JBeulich@xxxxxxxx
> ---
>  xen/arch/arm/gic.c         |   18 ++++++++++++++++--
>  xen/arch/arm/vgic.c        |   31 +++++++++++++++++++++++++++++++
>  xen/common/event_channel.c |    4 ++++
>  xen/include/asm-arm/gic.h  |    1 +
>  4 files changed, 52 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 6f24b14..43bef21 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -192,9 +192,23 @@ static void gic_guest_irq_end(struct irq_desc *desc)
>      /* Deactivation happens in maintenance interrupt / via GICV */
>  }
>
> -static void gic_irq_set_affinity(struct irq_desc *desc, const cpumask_t *mask)
> +static void gic_irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask)
>  {
> -    BUG();
> +    volatile unsigned char *bytereg;
> +    unsigned int mask;
> +
> +    if ( desc == NULL || cpumask_empty(cpu_mask) )
> +        return;
> +
> +    spin_lock(&gic.lock);
What does this lock actually protect against here? The only externally
visible effect is the write to bytereg, and all the inputs are
deterministic, I think. Perhaps a suitable barrier would suffice? Or
perhaps the caller ought to be holding the lock for some other reason
already.
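Something like the following is what I have in mind (a sketch only,
assuming the byte write to GICD_ITARGETSR really is the only shared
state touched here, and that dsb(sy) is the appropriate scope):

    mask = gic_cpu_mask(cpu_mask);

    /* Single byte-wide MMIO write; no other state is modified, so a
     * barrier ordering it against subsequent GIC accesses may be all
     * that is needed. */
    bytereg = (unsigned char *) (GICD + GICD_ITARGETSR);
    bytereg[desc->irq] = mask;
    dsb(sy);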
> +
> +    mask = gic_cpu_mask(cpu_mask);
> +
> +    /* Set target CPU mask (RAZ/WI on uniprocessor) */
> +    bytereg = (unsigned char *) (GICD + GICD_ITARGETSR);
> +    bytereg[desc->irq] = mask;
> +
> +    spin_unlock(&gic.lock);
>  }
>
>  /* XXX different for level vs edge */
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 54d3676..a90d9cb 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -408,6 +408,32 @@ struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq)
>      return v_target;
>  }
>
> +void vgic_move_irqs(struct vcpu *v)
> +{
> +    const cpumask_t *cpu_mask = cpumask_of(v->processor);
> +    struct domain *d = v->domain;
> +    struct pending_irq *p;
> +    int i, j, k;
> +
> +    for ( i = 0; i < DOMAIN_NR_RANKS(d); i++ )
> +    {
> +        for ( j = 0 ; j < 8 ; j++ )
> +        {
> +            for ( k = 0; k < 4; k++ )
> +            {
> +                uint8_t target = byte_read(d->arch.vgic.shared_irqs[i].itargets[j], 0, k);
i, j and k are pretty opaque here (and I wrote that before I saw the
machinations going on in the irq_to_pending call!).
Please just iterate over irqs and use the rank accessor functions to
get at the correct itargets to read. That might be clearest with the
irq_to_rank helper I proposed on an earlier patch.
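Roughly (an untested sketch: irq_to_rank is the proposed helper, and
I'm assuming d->arch.vgic.nr_lines is the right upper bound for SPIs):

    unsigned int irq;

    for ( irq = 32; irq < d->arch.vgic.nr_lines + 32; irq++ )
    {
        /* Rank covering this irq; each rank holds 32 irqs, with four
         * target bytes per itargets word. */
        struct vgic_irq_rank *rank = irq_to_rank(v, irq);
        uint8_t target = byte_read(rank->itargets[(irq % 32) / 4],
                                   0, irq % 4);

        if ( target == v->vcpu_id )
            ...
    }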
> +                target = find_next_bit((const unsigned long *) &target, 8, 0);
This is find_first_bit, I think.
It's just occurred to me that many of the

    i = 0;
    while ( (i = find_next_bit(...)) ... )

loops I've been seeing in this series might be better following a

    for ( i = find_first_bit(...); ...; i = find_next_bit(...) )

pattern.
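Spelt out, with bits/nbits standing in for whatever is being scanned:

    unsigned int i;

    for ( i = find_first_bit(&bits, nbits);
          i < nbits;
          i = find_next_bit(&bits, nbits, i + 1) )
    {
        /* process bit i */
    }

which avoids the assignment-in-condition and handles the empty case
naturally.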
> +                if ( target == v->vcpu_id )
> +                {
> +                    p = irq_to_pending(v, 32 * (i + 1) + (j * 4) + k);
> +                    if ( p->desc != NULL )
> +                        p->desc->handler->set_affinity(p->desc, cpu_mask);
A helper for this chain of indirections might be nice;
irq_set_affinity(desc, cpu_mask) or something.
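Something like this, say (the name and the NULL checks are just a
suggestion):

    static inline void irq_set_affinity(struct irq_desc *desc,
                                        const cpumask_t *cpu_mask)
    {
        if ( desc != NULL && desc->handler->set_affinity != NULL )
            desc->handler->set_affinity(desc, cpu_mask);
    }

which would also let callers drop their explicit desc != NULL checks.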
> +                }
> +            }
> +        }
> +    }
> +}
> +
>  static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>  {
>      const unsigned long mask = r;
> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
> index 6853842..226321d 100644
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -1319,6 +1319,10 @@ void evtchn_move_pirqs(struct vcpu *v)
>      unsigned int port;
>      struct evtchn *chn;
>
> +#ifdef CONFIG_ARM
> + vgic_move_irqs(v);
The indentation looks off here, and/or a hard vs soft tab mixup.
And this should be arch_move_irqs I think, perhaps with a stub on x86
instead of an ifdef.
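i.e. roughly (a sketch):

    /* xen/include/asm-arm/irq.h */
    void arch_move_irqs(struct vcpu *v);    /* backed by vgic_move_irqs() */

    /* xen/include/asm-x86/irq.h */
    static inline void arch_move_irqs(struct vcpu *v) { }

so that evtchn_move_pirqs() can simply call arch_move_irqs(v)
unconditionally, without the ifdef.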
> +#endif
> +
>      spin_lock(&d->event_lock);
>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>      {