Re: [Xen-devel] [PATCH v3 01/15] x86/IRQ: deal with move-in-progress state in fixup_irqs()
On 03.07.2019 17:39, Andrew Cooper wrote:
> On 17/05/2019 11:44, Jan Beulich wrote:
>> The flag being set may prevent affinity changes, as these often imply
>> assignment of a new vector. When there's no possible destination left
>> for the IRQ, the clearing of the flag needs to happen right from
>> fixup_irqs().
>>
>> Additionally _assign_irq_vector() needs to avoid setting the flag when
>> there's no online CPU left in what gets put into ->arch.old_cpu_mask.
>> The old vector can be released right away in this case.
>
> This suggests that it is a bugfix, but it isn't clear what happens when
> things go wrong.

The vector cleanup wouldn't ever trigger, as the IRQ wouldn't get raised
anymore to any of its prior target CPUs. Hence the immediate cleanup
that gets done in that case. I thought the 2nd sentence would make this
clear. If it doesn't, do you have a suggestion on how to improve the
text?

>> --- a/xen/arch/x86/irq.c
>> +++ b/xen/arch/x86/irq.c
>> @@ -2418,15 +2462,18 @@ void fixup_irqs(const cpumask_t *mask, b
>>          if ( desc->handler->enable )
>>              desc->handler->enable(desc);
>>
>> +        cpumask_copy(&affinity, desc->affinity);
>> +
>>          spin_unlock(&desc->lock);
>>
>>          if ( !verbose )
>>              continue;
>>
>> -        if ( break_affinity && set_affinity )
>> -            printk("Broke affinity for irq %i\n", irq);
>> -        else if ( !set_affinity )
>> -            printk("Cannot set affinity for irq %i\n", irq);
>> +        if ( !set_affinity )
>> +            printk("Cannot set affinity for IRQ%u\n", irq);
>> +        else if ( break_affinity )
>> +            printk("Broke affinity for IRQ%u, new: %*pb\n",
>> +                   irq, nr_cpu_ids, &affinity);
>
> While I certainly prefer this version, I should point out that you
> refused to accept my patches like this, and for consistency with the
> rest of the codebase, you should be using cpumask_bits().

Oh, indeed. I guess I had converted a debugging-only printk() into this
one without noticing the necessary tidying, especially since elsewhere
in the series I'm actually doing so already.
Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel