
Re: [Xen-devel] [PATCH v1b 1/9] x86/IRQ: deal with move-in-progress state in fixup_irqs()



>>> On 03.05.19 at 11:19, <roger.pau@xxxxxxxxxx> wrote:
> On Mon, Apr 29, 2019 at 09:40:14AM -0600, Jan Beulich wrote:
>> --- unstable.orig/xen/arch/x86/irq.c 
>> +++ unstable/xen/arch/x86/irq.c
>> @@ -242,6 +242,20 @@ void destroy_irq(unsigned int irq)
>>      xfree(action);
>>  }
>>  
>> +static void release_old_vec(struct irq_desc *desc)
>> +{
>> +    unsigned int vector = desc->arch.old_vector;
>> +
>> +    desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
>> +    cpumask_clear(desc->arch.old_cpu_mask);
>> +
>> +    if ( desc->arch.used_vectors )
> 
> Wouldn't it be better to clean the bitmap when vector !=
> IRQ_VECTOR_UNASSIGNED?

No code path does (or should) call here without actually needing to
release the previous vector.

> I haven't checked all the callers, but I don't think it's valid to
> call release_old_vec with desc->arch.old_vector ==
> IRQ_VECTOR_UNASSIGNED, in which case I would add an ASSERT.

Well, yes, I probably could. However, as much as I'm in
favor of ASSERT()s, I don't think it makes sense to ASSERT()
basically every bit of expected state. In the end there would
otherwise be more ASSERT()s than actual code.

>> +    {
>> +        ASSERT(test_bit(vector, desc->arch.used_vectors));
>> +        clear_bit(vector, desc->arch.used_vectors);
>> +    }
>> +}
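
(For completeness, the variant you're suggesting would look roughly like
the below - just a sketch, not something I intend to fold in:

static void release_old_vec(struct irq_desc *desc)
{
    unsigned int vector = desc->arch.old_vector;

    /* Make the "there is an old vector to release" precondition explicit. */
    ASSERT(vector != IRQ_VECTOR_UNASSIGNED);

    desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
    cpumask_clear(desc->arch.old_cpu_mask);

    if ( desc->arch.used_vectors )
    {
        ASSERT(test_bit(vector, desc->arch.used_vectors));
        clear_bit(vector, desc->arch.used_vectors);
    }
}

but, as said, I'd prefer not to assert every bit of expected state.)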
>> +
>>  static void __clear_irq_vector(int irq)
>>  {
>>      int cpu, vector, old_vector;
>> @@ -285,14 +299,7 @@ static void __clear_irq_vector(int irq)
> 
> Kind of unrelated, but I think the check at the top of
> __clear_irq_vector should be:
> 
> BUG_ON(desc->arch.vector == IRQ_VECTOR_UNASSIGNED);
> 
> Rather than the current:
> 
> BUG_ON(!desc->arch.vector);
> 
> There's a lot of logic that would go extremely wrong if vector is -1.

Yes indeed. Do you want to send a patch, or should I add
one at the end of this series?
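
For reference, that fix would be a one-liner along these lines (just a
sketch, hunk context elided):

@@ ... @@ static void __clear_irq_vector(int irq)
-    BUG_ON(!desc->arch.vector);
+    BUG_ON(desc->arch.vector == IRQ_VECTOR_UNASSIGNED);

i.e. it would catch the -1 sentinel, which the current
!desc->arch.vector check can't, -1 being non-zero.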

>>          per_cpu(vector_irq, cpu)[old_vector] = ~irq;
>>      }
>>  
>> -    desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
>> -    cpumask_clear(desc->arch.old_cpu_mask);
>> -
>> -    if ( desc->arch.used_vectors )
>> -    {
>> -        ASSERT(test_bit(old_vector, desc->arch.used_vectors));
>> -        clear_bit(old_vector, desc->arch.used_vectors);
>> -    }
>> +    release_old_vec(desc);
>>  
>>      desc->arch.move_in_progress = 0;
> 
> While there it might be nice to convert move_in_progress to a boolean.

This would grow the patch quite a bit, I think, so I prefer not to.
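
(For reference, the conversion itself would be mostly mechanical, e.g.
the write quoted above becoming

    desc->arch.move_in_progress = false;

plus the matching change to the field's declaration - but that would
touch a fair number of otherwise unrelated lines.)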

>> @@ -2409,15 +2446,18 @@ void fixup_irqs(const cpumask_t *mask, b
>>          if ( desc->handler->enable )
>>              desc->handler->enable(desc);
>>  
>> +        cpumask_copy(&affinity, desc->affinity);
>> +
>>          spin_unlock(&desc->lock);
>>  
>>          if ( !verbose )
>>              continue;
>>  
>> -        if ( break_affinity && set_affinity )
>> -            printk("Broke affinity for irq %i\n", irq);
>> -        else if ( !set_affinity )
>> -            printk("Cannot set affinity for irq %i\n", irq);
>> +        if ( !set_affinity )
>> +            printk("Cannot set affinity for IRQ%u\n", irq);
>> +        else if ( break_affinity )
>> +            printk("Broke affinity for IRQ%u, new: %*pb\n",
>> +                   irq, nr_cpu_ids, &affinity);
> 
> I guess it's fine to have those without rate-limiting because
> fixup_irqs is only called for admin-triggered actions, so there's no
> risk of console flooding.

Right, plus I'd rather not hide any of these messages: their presence
was already a good indication that something _might_ be going wrong.
If we get to the point where we're fully confident in the code, then
we could think about lowering their log level, or rate limiting them.
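
(Lowering the level would then be a matter of prefixing them, e.g. -
sketch only, using the standard XENLOG_INFO prefix:

        if ( !set_affinity )
            printk(XENLOG_INFO "Cannot set affinity for IRQ%u\n", irq);
        else if ( break_affinity )
            printk(XENLOG_INFO "Broke affinity for IRQ%u, new: %*pb\n",
                   irq, nr_cpu_ids, &affinity);

but as said, I wouldn't want to do so right away.)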

Jan


