[Xen-changelog] [xen master] x86: fix ordering of operations in destroy_irq()
commit 43427e65ccfea0c6dc0232f358287e6cc616b507
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri May 31 08:35:22 2013 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri May 31 08:35:22 2013 +0200

    x86: fix ordering of operations in destroy_irq()

    The fix for XSA-36, switching the default of vector map management to
    be per-device, exposed more readily a problem with the cleanup of these
    vector maps: dynamic_irq_cleanup() clearing desc->arch.used_vectors
    keeps the subsequently invoked clear_irq_vector() from clearing the
    bits for both the in-use and a possibly still outstanding old vector.

    Fix this by folding dynamic_irq_cleanup() into destroy_irq() (which was
    its only caller), deferring the clearing of the vector map pointer
    until after clear_irq_vector().

    While at it, also defer resetting of desc->handler until after the loop
    around smp_mb() checking for IRQ_INPROGRESS to be clear, fixing a
    (mostly theoretical) issue with the interaction with do_IRQ(): if we
    don't defer the pointer reset, do_IRQ() could, for non-guest IRQs, call
    ->ack() and ->end() with different ->handler pointers, potentially
    leaving an IRQ un-acked. The issue is mostly theoretical because
    non-guest IRQs are subject to destroy_irq() only on (boot time) error
    paths.

    As to the changed locking: invoking clear_irq_vector() with desc->lock
    held is okay because vector_lock already nests inside desc->lock
    (proven by set_desc_affinity(), which takes vector_lock and is called
    from various desc->handler->ack implementations, themselves invoked
    with desc->lock held).

    Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Keir Fraser <keir@xxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
---
 xen/arch/x86/irq.c |   43 ++++++++++++++++++++-----------------------
 1 files changed, 20 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index b0b0c65..699221a 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -197,12 +197,24 @@ int create_irq(int node)
     return irq;
 }
 
-static void dynamic_irq_cleanup(unsigned int irq)
+void destroy_irq(unsigned int irq)
 {
     struct irq_desc *desc = irq_to_desc(irq);
     unsigned long flags;
     struct irqaction *action;
 
+    BUG_ON(!MSI_IRQ(irq));
+
+    if ( dom0 )
+    {
+        int err = irq_deny_access(dom0, irq);
+
+        if ( err )
+            printk(XENLOG_G_ERR
+                   "Could not revoke Dom0 access to IRQ%u (error %d)\n",
+                   irq, err);
+    }
+
     spin_lock_irqsave(&desc->lock, flags);
     desc->status |= IRQ_DISABLED;
     desc->status &= ~IRQ_GUEST;
@@ -210,16 +222,19 @@ static void dynamic_irq_cleanup(unsigned int irq)
     action = desc->action;
     desc->action = NULL;
     desc->msi_desc = NULL;
-    desc->handler = &no_irq_type;
-    desc->arch.used_vectors = NULL;
     cpumask_setall(desc->affinity);
     spin_unlock_irqrestore(&desc->lock, flags);
 
     /* Wait to make sure it's not being used on another CPU */
     do { smp_mb(); } while ( desc->status & IRQ_INPROGRESS );
 
-    if (action)
-        xfree(action);
+    spin_lock_irqsave(&desc->lock, flags);
+    desc->handler = &no_irq_type;
+    clear_irq_vector(irq);
+    desc->arch.used_vectors = NULL;
+    spin_unlock_irqrestore(&desc->lock, flags);
+
+    xfree(action);
 }
 
 static void __clear_irq_vector(int irq)
@@ -286,24 +301,6 @@ void clear_irq_vector(int irq)
     spin_unlock_irqrestore(&vector_lock, flags);
 }
 
-void destroy_irq(unsigned int irq)
-{
-    BUG_ON(!MSI_IRQ(irq));
-
-    if ( dom0 )
-    {
-        int err = irq_deny_access(dom0, irq);
-
-        if ( err )
-            printk(XENLOG_G_ERR
-                   "Could not revoke Dom0 access to IRQ%u (error %d)\n",
-                   irq, err);
-    }
-
-    dynamic_irq_cleanup(irq);
-    clear_irq_vector(irq);
-}
-
 int irq_to_vector(int irq)
 {
     int vector = -1;
--
generated by git-patchbot for /home/xen/git/xen.git#master
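The ordering problem described above comes down to one dependency: the vector
cleanup code can only release vector-map bits while desc->arch.used_vectors
still points at the map. The standalone model below is a minimal sketch, not
the Xen source; the types, helper names and clear_irq_vector_model() are
simplified stand-ins invented for illustration. It reproduces the effect of
the old ordering, where dynamic_irq_cleanup() detached the map before
clear_irq_vector() ran:

    #include <stdio.h>
    #include <limits.h>

    #define NR_VECTORS    256
    #define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

    /* Simplified per-device vector map: one bit per vector. */
    typedef unsigned long vmask_t[NR_VECTORS / BITS_PER_LONG];

    struct irq_desc {
        int vector;            /* vector currently in use */
        int old_vector;        /* possibly still outstanding old vector */
        vmask_t *used_vectors; /* per-device vector map, may be NULL */
    };

    static void set_map_bit(vmask_t *map, int v)
    {
        (*map)[v / BITS_PER_LONG] |= 1UL << (v % BITS_PER_LONG);
    }

    static void clear_map_bit(vmask_t *map, int v)
    {
        (*map)[v / BITS_PER_LONG] &= ~(1UL << (v % BITS_PER_LONG));
    }

    static int test_map_bit(vmask_t *map, int v)
    {
        return !!((*map)[v / BITS_PER_LONG] & (1UL << (v % BITS_PER_LONG)));
    }

    /* Rough stand-in for clear_irq_vector(): it can only release the
     * in-use and old vector bits if the descriptor still points at the
     * vector map. */
    static void clear_irq_vector_model(struct irq_desc *desc)
    {
        if ( desc->used_vectors )
        {
            clear_map_bit(desc->used_vectors, desc->vector);
            clear_map_bit(desc->used_vectors, desc->old_vector);
        }
    }

    int main(void)
    {
        static vmask_t map;
        struct irq_desc desc = { .vector = 0x30, .old_vector = 0x38,
                                 .used_vectors = &map };

        set_map_bit(&map, desc.vector);
        set_map_bit(&map, desc.old_vector);

        /* Old ordering: the map pointer was cleared first ... */
        desc.used_vectors = NULL;
        /* ... so the subsequent vector cleanup finds no map and both
         * bits stay set -- the leak the commit fixes. */
        clear_irq_vector_model(&desc);

        printf("vector %#x still marked: %d, old vector %#x still marked: %d\n",
               desc.vector, test_map_bit(&map, desc.vector),
               desc.old_vector, test_map_bit(&map, desc.old_vector));
        return 0;
    }

With the committed ordering, clear_irq_vector() runs first and the pointer is
reset only afterwards, so both the in-use and the old vector bit are released
before the map is detached.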
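The lock-ordering argument at the end of the message is the usual one: taking
vector_lock while already holding desc->lock is safe as long as no code path
acquires the two locks in the opposite order. The sketch below is only an
illustration of that reasoning, with pthread mutexes standing in for Xen's
spinlocks and both function names invented for the example:

    #include <pthread.h>

    /* Stand-ins for desc->lock and the global vector_lock. */
    static pthread_mutex_t desc_lock   = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t vector_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Models the set_desc_affinity() path: reached from ->ack() with
     * desc_lock held, and takes vector_lock inside it, establishing the
     * nesting order desc_lock -> vector_lock. */
    static void affinity_path(void)
    {
        pthread_mutex_lock(&desc_lock);
        pthread_mutex_lock(&vector_lock);   /* inner lock */
        pthread_mutex_unlock(&vector_lock);
        pthread_mutex_unlock(&desc_lock);
    }

    /* Models the reworked destroy_irq(): calling into the vector cleanup
     * (which takes vector_lock) under desc_lock uses the same order, so it
     * cannot deadlock against the path above. */
    static void destroy_path(void)
    {
        pthread_mutex_lock(&desc_lock);
        pthread_mutex_lock(&vector_lock);   /* same nesting order */
        pthread_mutex_unlock(&vector_lock);
        pthread_mutex_unlock(&desc_lock);
    }

    int main(void)
    {
        affinity_path();
        destroy_path();
        return 0;
    }

Because both paths acquire desc_lock before vector_lock, the classic ABBA
deadlock pattern cannot arise between them, which is the point the commit
message makes about the changed locking.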