Re: [Xen-devel] [PATCH 2/9] x86/IRQ: deal with move cleanup count state in fixup_irqs()
On Mon, Apr 29, 2019 at 05:23:20AM -0600, Jan Beulich wrote:
> The cleanup IPI may get sent immediately before a CPU gets removed from
> the online map. In such a case the IPI would get handled on the CPU
> being offlined no earlier than in the interrupts-disabled window after
> fixup_irqs()' main loop. This is too late, however, because a possible
> affinity change may incur the need for vector assignment, which will
> fail while the IRQ's move cleanup count is still non-zero.
>
> To fix this:
> - record the set of CPUs the cleanup IPI actually gets sent to alongside
>   setting their count,
> - adjust the count in fixup_irqs(), accounting for all CPUs that the
>   cleanup IPI was sent to but that are no longer online,
> - bail early from the cleanup IPI handler when the CPU is no longer
>   online, to prevent double accounting.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Just as a note, this whole interrupt migration business seems extremely
complex, and I wonder whether Xen really needs it, or what exactly its
performance gain is compared to simpler solutions. I understand these are
just fixes, but IMO they make the logic even more complex. Maybe it would
be simpler to hard-bind interrupts to pCPUs and instead have a soft
affinity on the guest vCPUs that are assigned as the destination?

> ---
> TBD: The proper recording of the IPI destinations actually makes the
>      move_cleanup_count field redundant. Do we want to drop it, at the
>      price of a few more CPU-mask operations?

AFAICT this is not a hot path, so I would remove the move_cleanup_count
field and just compute the weight of the CPU bitmap when needed.

Thanks, Roger.
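To make the suggestion above concrete, here is a minimal sketch of deriving
the count on demand instead of storing it. It assumes a hypothetical
desc->arch.cleanup_ipi_mask field recording the cleanup IPI destinations;
that name, and the helper itself, are illustrative only and not taken from
the actual patch:

/*
 * Sketch only: cleanup_ipi_mask is an assumed field name for the recorded
 * set of cleanup IPI destinations; the real series keeps move_cleanup_count
 * and uses its own naming.
 */
static unsigned int move_cleanup_count(const struct irq_desc *desc)
{
    cpumask_t pending;

    /* Only CPUs that are still online can still run the cleanup handler. */
    cpumask_and(&pending, &desc->arch.cleanup_ipi_mask, &cpu_online_map);

    return cpumask_weight(&pending);
}

A caller on the vector-assignment path would then check
move_cleanup_count(desc) != 0 where it currently reads the stored counter;
since that only happens on (rare) affinity changes, the extra CPU-mask
operations stay off any hot path, in line with the observation above.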