Re: [Xen-devel] [PATCH v1b 1/9] x86/IRQ: deal with move-in-progress state in fixup_irqs()
On Mon, May 06, 2019 at 01:15:59AM -0600, Jan Beulich wrote:
> >>> On 03.05.19 at 16:10, <JBeulich@xxxxxxxx> wrote:
> >>>> On 03.05.19 at 11:19, <roger.pau@xxxxxxxxxx> wrote:
> >> On Mon, Apr 29, 2019 at 09:40:14AM -0600, Jan Beulich wrote:
> >>> --- unstable.orig/xen/arch/x86/irq.c
> >>> +++ unstable/xen/arch/x86/irq.c
> >>> @@ -242,6 +242,20 @@ void destroy_irq(unsigned int irq)
> >>>      xfree(action);
> >>>  }
> >>>
> >>> +static void release_old_vec(struct irq_desc *desc)
> >>> +{
> >>> +    unsigned int vector = desc->arch.old_vector;
> >>> +
> >>> +    desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
> >>> +    cpumask_clear(desc->arch.old_cpu_mask);
> >>> +
> >>> +    if ( desc->arch.used_vectors )
> >>
> >> Wouldn't it be better to clean the bitmap when vector !=
> >> IRQ_VECTOR_UNASSIGNED?
> >
> > No code path does / should call into here without the need to
> > actually release the previous vector.
> >
> >> I haven't checked all the callers, but I don't think it's valid to
> >> call release_old_vec with desc->arch.old_vector ==
> >> IRQ_VECTOR_UNASSIGNED, in which case I would add an ASSERT.
> >
> > Well, yes, I probably could. However, as much as I'm in
> > favor of ASSERT()s, I don't think it makes sense to ASSERT()
> > basically every bit of expected state. In the end there would
> > otherwise be more ASSERT()s than actual code.
>
> Actually, upon second thought - let me add this, but then in an
> even more strict form: Certain very low and very high numbered
> vectors are illegal here as well, and we may then be able to use
> the same validation helper elsewhere (in particular also for the
> check that you've found to be wrong in _clear_irq_vector()).

Thanks, that LGTM.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel