[Xen-devel] Re: [PATCH] IRQ: fix incorrect logic in __clear_irq_vector
On 12/08/11 14:10, Andrew Cooper wrote:
> In the old code, tmp_mask is the cpu_and of cfg->cpu_mask and
> cpu_online_map.  However, in the usual case of moving an IRQ from one
> PCPU to another because the scheduler decides it's a good idea,
> cfg->cpu_mask and cfg->old_cpu_mask do not intersect.  This causes the
> old CPU's vector_irq table to keep the IRQ reference when it shouldn't.
>
> This leads to a resource leak if a domain is shut down while an IRQ has
> a move pending, which results in Xen's create_irq() eventually failing
> with -ENOSPC when all vector_irq tables are full of stale references.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>
> diff -r 1f08b380d438 -r fa051d11b3de xen/arch/x86/irq.c
> --- a/xen/arch/x86/irq.c    Wed Aug 10 14:43:34 2011 +0100
> +++ b/xen/arch/x86/irq.c    Fri Aug 12 14:09:52 2011 +0100
> @@ -216,7 +216,7 @@ static void __clear_irq_vector(int irq)
>
>      if (likely(!cfg->move_in_progress))
>          return;
> -    for_each_cpu_mask(cpu, tmp_mask) {
> +    for_each_cpu_mask(cpu, cfg->old_cpu_mask) {
>          for (vector = FIRST_DYNAMIC_VECTOR; vector <= LAST_DYNAMIC_VECTOR;
>               vector++) {
>              if (per_cpu(vector_irq, cpu)[vector] != irq)

Apologies for the previous spam of this patch - I failed somewhat with
patchbomb.

Two things come to mind:

1) This affects all versions of Xen since per-CPU IDTs were introduced,
so it is a candidate for backporting to all relevant trees.

2) What would the tradeoff be of adding a "u8 old_vector" to irq_cfg?
It would increase the size of the cfg structure, but it would avoid the
several pieces of code which loop through all dynamic vectors and check
whether the IRQ's vector matches.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
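To illustrate point 2 above, here is a minimal sketch of what carrying the
old vector in irq_cfg might look like. The old_vector field and the
simplified cleanup are hypothetical; the surrounding names
(for_each_cpu_mask, per_cpu(vector_irq, ...), cfg->old_cpu_mask,
cfg->move_in_progress) follow the code quoted in the patch, but this is only
an outline of the idea, not a tested change to Xen's irq.c:

    /*
     * Hypothetical sketch: remember the vector that was in use before a
     * pending move, so cleanup does not have to scan every dynamic vector.
     */
    struct irq_cfg {
        int       vector;
        cpumask_t cpu_mask;
        cpumask_t old_cpu_mask;
        u8        old_vector;       /* proposed: vector prior to the move */
        /* ... other existing fields ... */
    };

    /* The tail of __clear_irq_vector() could then become: */
    if (likely(!cfg->move_in_progress))
        return;

    /* Clear only the remembered stale entry on each old CPU,
     * assuming -1 marks an unused vector_irq slot as in the
     * existing cleanup loop. */
    for_each_cpu_mask(cpu, cfg->old_cpu_mask)
        per_cpu(vector_irq, cpu)[cfg->old_vector] = -1;

    cfg->move_in_progress = 0;

The cost is one extra byte (plus padding) per irq_cfg; the gain is that
cleanup after a pending move touches one entry per old CPU instead of
scanning every dynamic vector on each of them.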