Re: [Xen-devel] [RFC 05/19] xen/arm: Release IRQ routed to a domain when it's destroying
On Wed, 18 Jun 2014, Julien Grall wrote:
> On 18/06/14 19:08, Stefano Stabellini wrote:
> > > +/* The guest may not have EOIed the IRQ.
> > > + * Be sure to reset correctly the IRQ.
> > > + */
> > > +void gic_reset_guest_irq(struct irq_desc *desc)
> > > +{
> > > +    ASSERT(spin_is_locked(&desc->lock));
> > > +    ASSERT(desc->status & IRQ_GUEST);
> > > +
> > > +    if ( desc->status & IRQ_INPROGRESS )
> > > +        GICC[GICC_DIR] = desc->irq;
> > > +}
> >
> > You should call gic_update_one_lr first, then check IRQ_INPROGRESS.
> > You should also call gic_remove_from_queues, remove the irq from the
> > inflight queue and clear the GIC_IRQ_GUEST_* status bits.
>
> Are you sure? This function is only called when the domain is dying, so
> the guest is already unscheduled. Therefore gic_update_one_lr won't work.
>
> I can add an ASSERT(irq_get_domain(desc)->is_dying) here...

The ASSERT is a good idea.

Given that the domain has been descheduled, gic_update_one_lr won't work,
but you can read the saved lr (pending_irq->lr) from v->arch.gic_lr. You
can obtain the target vcpu by calling vgic_get_target_vcpu. You only need
to write to GICC_DIR if (gic_lr & (GICH_LR_ACTIVE|GICH_LR_PENDING)).
gic_remove_from_queues should still work. (See the untested sketch at the
end of this mail.)

Also I wonder if you need to call gic_reset_guest_irq before
desc->handler->shutdown. The specification states (4.3.5):

'Disabling an interrupt only disables the forwarding of the interrupt
from the Distributor to any CPU interface. It does not prevent the
interrupt from changing state, for example becoming pending, or active
and pending if it is already active.'

So from the text above I think that EOIing an interrupt that has been
disabled at the GICD level should work, but it is not 100% clear.
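To put the whole sequence together, something along these lines is what I
have in mind. It is only an untested sketch: I am assuming the
irq_to_pending helper, the GIC_INVALID_LR sentinel and the v->arch.gic_lr
layout from staging, and I am guessing at the exact vgic_get_target_vcpu
arguments, so the names might need adjusting:

    void gic_reset_guest_irq(struct irq_desc *desc)
    {
        struct domain *d = irq_get_domain(desc);
        struct vcpu *v;
        struct pending_irq *p;

        ASSERT(spin_is_locked(&desc->lock));
        ASSERT(desc->status & IRQ_GUEST);
        ASSERT(d->is_dying);

        /* the irq is routed to the guest, find the vcpu it targets */
        v = vgic_get_target_vcpu(d->vcpu[0], desc->irq);
        p = irq_to_pending(v, desc->irq);

        /* the domain is descheduled: read the saved lr instead of
         * calling gic_update_one_lr */
        if ( p->lr != GIC_INVALID_LR )
        {
            uint32_t gic_lr = v->arch.gic_lr[p->lr];

            /* the guest hasn't EOIed the irq: deactivate it ourselves */
            if ( gic_lr & (GICH_LR_ACTIVE|GICH_LR_PENDING) )
                GICC[GICC_DIR] = desc->irq;
        }

        gic_remove_from_queues(v, desc->irq);
        /* also remove the irq from the inflight list and clear the
         * GIC_IRQ_GUEST_* status bits here */
    }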