
Re: [Xen-devel] [PATCH RFC 1/9] x86/IRQ: deal with move-in-progress state in fixup_irqs()



>>> On 29.04.19 at 13:22, <JBeulich@xxxxxxxx> wrote:
> RFC: I've seen the new ASSERT() in irq_move_cleanup_interrupt() trigger.
>      I'm pretty sure that this assertion triggering means something else
>      is wrong, and has been even prior to this change (adding the
>      assertion without any of the other changes here should be valid in
>      my understanding).

So I think what is missing is the updating of vector_irq ...

> @@ -2391,6 +2401,24 @@ void fixup_irqs(const cpumask_t *mask, b
>              continue;
>          }
>  
> +        /*
> +         * In order for the affinity adjustment below to be successful, we
> +         * need __assign_irq_vector() to succeed. This in particular means
> +         * clearing desc->arch.move_in_progress if this would otherwise
> +         * prevent the function from succeeding. Since there's no way for the
> +         * flag to get cleared anymore when there's no possible destination
> +         * left (the only possibility then would be the IRQs enabled window
> +         * after this loop), there's then also no race with us doing it here.
> +         *
> +         * Therefore the logic here and there needs to remain in sync.
> +         */
> +        if ( desc->arch.move_in_progress &&
> +             !cpumask_intersects(mask, desc->arch.cpu_mask) )
> +        {
> +            release_old_vec(desc);
> +            desc->arch.move_in_progress = 0;
> +        }

... here and in the somewhat similar logic that patch 2 inserts a few
lines further up. I'm about to try this out, but given how rarely I've
seen the problem, it will take a while before I can feel confident in
the result (if, of course, it helps in the first place).
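
To be concrete, what I have in mind is roughly the following, placed
ahead of the release_old_vec() call above (a sketch only, not what I
have tested; it borrows the ~irq invalidation convention that
_clear_irq_vector() uses, and assumes a local "unsigned int cpu"):

        /*
         * Invalidate the stale per-CPU mappings before release_old_vec()
         * clears desc->arch.old_cpu_mask and desc->arch.old_vector, so
         * that a late interrupt arriving through the old vector can't be
         * mis-routed.
         */
        for_each_cpu ( cpu, desc->arch.old_cpu_mask )
            per_cpu(vector_irq, cpu)[desc->arch.old_vector] = ~irq;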

Jan




 

