Re: [PATCH for-4.21 03/10] x86/HPET: use single, global, low-priority vector for broadcast IRQ
On Mon, Oct 20, 2025 at 06:05:04PM +0200, Jan Beulich wrote:
> On 20.10.2025 17:49, Roger Pau Monné wrote:
> > On Mon, Oct 20, 2025 at 07:53:51AM +0200, Jan Beulich wrote:
> >> On 17.10.2025 10:20, Roger Pau Monné wrote:
> >>> On Fri, Oct 17, 2025 at 09:15:08AM +0200, Jan Beulich wrote:
> >>>> On 16.10.2025 18:27, Roger Pau Monné wrote:
> >>>>> On Thu, Oct 16, 2025 at 09:32:04AM +0200, Jan Beulich wrote:
> >>>>>> @@ -497,6 +503,7 @@ static void set_channel_irq_affinity(str
> >>>>>> spin_lock(&desc->lock);
> >>>>>> hpet_msi_mask(desc);
> >>>>>> hpet_msi_set_affinity(desc, cpumask_of(ch->cpu));
> >>>>>> + per_cpu(vector_irq, ch->cpu)[HPET_BROADCAST_VECTOR] = ch->msi.irq;
> >>>>>
> >>>>> I would set the vector table ahead of setting the affinity, in case we
> >>>>> can drop the mask calls around this block of code.
> >>>>
> >>>> Isn't there a problematic window either way round? I can make the change,
> >>>> but I don't see that addressing anything. The new comparator value will
> >>>> be written later anyway, and interrupts up to that point aren't of any
> >>>> interest. I.e. it doesn't matter which of the CPUs gets to handle
> >>>> them.
> >>>
> >>> Isn't it preferable to get a silent stray interrupt (if the per-cpu
> >>> vector table is correctly set up), rather than to get a message from
> >>> Xen that an unknown vector has been received?
> >>>
> >>> If a vector is injected ahead of vector_irq being set, Xen would
> >>> complain in do_IRQ() that there's no handler for such a vector.
> >>
> >> As of now, setup_vector_irq() makes sure the field isn't uninitialized
> >> (i.e. left at INT_MIN). With that change dropped (see below), there
> >> would indeed be such a risk (on the first instance on each CPU).
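
[ For reference, the reordering suggested above would look roughly like
the below -- an untested sketch against the quoted hunk, with the
question of whether the hpet_msi_mask() / hpet_msi_unmask() pair can
then be dropped left open:

    spin_lock(&desc->lock);
    hpet_msi_mask(desc);
    /*
     * Publish the vector -> IRQ mapping on the target CPU ahead of the
     * affinity change, so that a stray interrupt taken in the window
     * finds a handler instead of triggering the "no handler" complaint
     * in do_IRQ().
     */
    per_cpu(vector_irq, ch->cpu)[HPET_BROADCAST_VECTOR] = ch->msi.irq;
    hpet_msi_set_affinity(desc, cpumask_of(ch->cpu));
]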
> >>
> >>>>>> --- a/xen/arch/x86/irq.c
> >>>>>> +++ b/xen/arch/x86/irq.c
> >>>>>> @@ -755,8 +755,9 @@ void setup_vector_irq(unsigned int cpu)
> >>>>>> if ( !irq_desc_initialized(desc) )
> >>>>>> continue;
> >>>>>> vector = irq_to_vector(irq);
> >>>>>> - if ( vector >= FIRST_HIPRIORITY_VECTOR &&
> >>>>>> - vector <= LAST_HIPRIORITY_VECTOR )
> >>>>>> + if ( vector <= (vector >= FIRST_HIPRIORITY_VECTOR
> >>>>>> + ? LAST_HIPRIORITY_VECTOR
> >>>>>> + : LAST_LOPRIORITY_VECTOR) )
> >>>>>> cpumask_set_cpu(cpu, desc->arch.cpu_mask);
> >>>>>
> >>>>> I think this is wrong. The low priority vector used by the HPET will
> >>>>> only target a single CPU at a time, and hence adding extra CPUs to
> >>>>> that mask as part of AP bringup is not correct.
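
[ For readability, the new check is logically equivalent to the
expanded form:

    if ( (vector >= FIRST_HIPRIORITY_VECTOR &&
          vector <= LAST_HIPRIORITY_VECTOR) ||
         (vector < FIRST_HIPRIORITY_VECTOR &&
          vector <= LAST_LOPRIORITY_VECTOR) )
        cpumask_set_cpu(cpu, desc->arch.cpu_mask);

i.e. besides the high-priority range, vectors up to
LAST_LOPRIORITY_VECTOR (the HPET broadcast vector among them) now also
get the CPU added to cpu_mask. ]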
> >>>>
> >>>> I'm not sure about "wrong". It's not strictly necessary for the HPET one,
> >>>> I expect, but it's what would generally be needed. For the HPET one,
> >>>> hpet_msi_set_affinity() replaces the value anyway. (I can add a sentence
> >>>> to this effect to the description, if that helps.)
> >>>
> >>> I do think it's wrong, it's just not harmful per se apart from showing
> >>> up in the output of dump_irqs(). The value in desc->arch.cpu_mask
> >>> should be the CPU that's the destination of the interrupt. In this
> >>> case, the HPET interrupt does have a single destination at a given
> >>> time, and adding another one will make the output of dump_irqs() show
> >>> two destinations, when the interrupt will target a single CPU.
> >>
> >> Just that as soon as the interrupt is actually in use, what is done
> >> here doesn't matter anymore.
> >>
> >> I continue to think the change is correct for the general case: I'd
> >> expect these special vectors to normally (just not here) be used as
> >> "direct APIC vectors", in which case the IRQ does have multiple
> >> destinations.
> >
> > I think it depends on the usage of the vector. There are indeed
> > vectors that are active on all CPUs at the same time (like the current
> > high-priority ones). However, in the case of the HPET vector that's
> > not the case: it targets a single CPU specifically.
> >
> > I think it would be best if vectors that are used on all CPUs at the
> > same time were initialized using cpumask_all or cpumask_setall(),
> > avoiding having to set a new bit every time a CPU is started. It's
> > fine for cpu_mask to contain offline CPUs.
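
[ i.e. something along the lines of (untested sketch):

    /*
     * For vectors active on all CPUs simultaneously, set the whole mask
     * once at setup time instead of adding a bit per CPU at AP bringup.
     */
    cpumask_setall(desc->arch.cpu_mask);
]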
>
> I don't think so. There may be fewer dependencies now, but look at e.g.
> the check in _bind_irq_vector(). Or this loop
>
> for_each_cpu(cpu, desc->arch.cpu_mask)
> per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
>
> in _assign_irq_vector() (that may be fine because of how the mask is
> set just before the loop, but the loop itself very much assumes no
> offline CPUs in there). The most problematic example may be in
> fixup_irqs(), where cpumask_any(desc->arch.cpu_mask) is used.
Then it looks like the comment ahead of the field declaration in irq.h
is wrong:
/*
* Except for high priority interrupts @cpu_mask may have bits set for
* offline CPUs. Consumers need to be careful to mask this down to
* online ones as necessary. There is supposed to always be a non-
* empty intersection with cpu_online_map.
*/
I realize now the comment says "Except for high priority", but we
don't seem to make such a differentiation in most of the code (like
fixup_irqs()).
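E.g. for the comment's invariant to be usable, a consumer picking a
destination the way fixup_irqs() does would need to mask down to online
CPUs first, along the lines of (untested sketch):

    cpumask_t mask;
    unsigned int cpu;

    /* Reduce cpu_mask to online CPUs before picking a destination. */
    cpumask_and(&mask, desc->arch.cpu_mask, &cpu_online_map);
    ASSERT(!cpumask_empty(&mask)); /* the comment's claimed invariant */
    cpu = cpumask_any(&mask);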
Hopefully this will be much simpler if I can get rid of the
cpumasks in arch_irq_desc.
Thanks, Roger.