Re: [Xen-devel] [PATCH v8 09/13] xen/arm: second irq injection while the first irq is still inflight
On Thu, 22 May 2014, Julien Grall wrote:
> On 22/05/14 18:39, Stefano Stabellini wrote:
> > On Thu, 22 May 2014, Julien Grall wrote:
> > > > while the first one is still active.
> > > > If the first irq is already pending (not active), clear
> > > > GIC_IRQ_GUEST_QUEUED because the guest doesn't need a second
> > > > notification. If the irq has already been EOI'ed then just clear the
> > > > GICH_LR right away and move the interrupt to lr_pending so that it is
> > > > going to be reinjected by gic_restore_pending_irqs on return to guest.
> > > >
> > > > If the target cpu is not the current cpu, then set GIC_IRQ_GUEST_QUEUED
> > > > and send an SGI. The target cpu is going to be interrupted and call
> > > > gic_clear_lrs, which is going to take the same actions.
> > > >
> > > > Do not call vgic_vcpu_inject_irq from gic_inject if
> > > > evtchn_upcall_pending is set. If we remove that call, we don't need to
> > > > special case evtchn_irq in vgic_vcpu_inject_irq anymore.
> > > > We need to force the first injection of evtchn_irq (call
> > > > gic_vcpu_inject_irq) from vgic_enable_irqs because evtchn_upcall_pending
> > > > is already set by common code on vcpu creation.
> > >
> > > If you only need it for the first time, why can't you call vgic_inject_irq
> > > with the IRQ evtchn when the VCPU is turned on?
> > >
> > > This would remove every hack with this IRQ in the GIC code.
> >
> > In principle that sounds nice, but in practice it is difficult and risks
> > being racy. In vgic_vcpu_inject_irq we have:
> >
> >     /* vcpu offline */
> >     if ( test_bit(_VPF_down, &v->pause_flags) )
> >     {
> >         spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> >         return;
> >     }
> >
> > So we can only inject the irq once the vcpu is properly up, that is
> > certainly later than vcpu_initialise.
>
> If we call vcpu_vgic_inject right before vcpu_wake (the _VPF_down flag has
> been cleared) we won't have any race condition.
> This can be done in both arch/arm/vpsci.c and common/domain.c (VCPUOP_up).
> It may require an arch specific function, something like arch_vcpu_prepare_up.

The following change works:

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 33141e3..2a8456f 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -644,6 +644,8 @@ int arch_set_info_guest(
     else
         set_bit(_VPF_down, &v->pause_flags);
 
+    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index af5cd6c..d597f63 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1087,6 +1087,8 @@ int construct_dom0(struct domain *d)
     }
 #endif
 
+    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
+
     for ( i = 1, cpu = 0; i < d->max_vcpus; i++ )
     {
         cpu = cpumask_cycle(cpu, &cpu_online_map);
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 4869b87..2f86de1 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -404,17 +404,10 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        if ( irq == v->domain->arch.evtchn_irq &&
-             vcpu_info(current, evtchn_upcall_pending) &&
-             list_empty(&p->inflight) )
-            vgic_vcpu_inject_irq(v, irq);
-        else {
-            unsigned long flags;
-            spin_lock_irqsave(&v->arch.vgic.lock, flags);
-            if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-                gic_raise_guest_irq(v, irq, p->priority);
-            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
-        }
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
+        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+            gic_raise_guest_irq(v, irq, p->priority);
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
         if ( p->desc != NULL )
         {
             spin_lock_irqsave(&p->desc->lock, flags);

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel