[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH] x86/HPET: channel handling in hpet_broadcast_resume()


  • To: "Jan Beulich" <jbeulich@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: "Teddy Astie" <teddy.astie@xxxxxxxxxx>
  • Date: Tue, 07 Apr 2026 14:28:33 +0000
  • Cc: "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx>, "Roger Pau Monné" <roger.pau@xxxxxxxxxx>, "Marek Marczykowski" <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 07 Apr 2026 14:28:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 07/04/2026 at 15:35, Jan Beulich wrote:
> The per-channel ENABLE bit is to solely be driven by hpet_enable_channel()
> and hpet_msi_{,un}mask(). It doesn't need setting immediately. Except for
> the (possible) channel put in legacy mode we don't do so during boot
> either.
>
> Instead reset ->arch.cpu_mask, to avoid msi_compose_msg() yielding an
> all-zero message (when the passed in CPU mask has no online CPUs). Nothing
> would later call msi_compose_msg() / hpet_msi_write(), and hence nothing
> would later produce a well-formed message template in
> hpet_events[].msi.msg.
>
> Fixes: 15aa6c67486c ("amd iommu: use base platform MSI implementation")
> Reported-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> Tested-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
> ---
> As to the Fixes: tag: The issue for the HPET resume case is the
> cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) check in
> msi_compose_msg(). The earlier cpumask_empty() wasn't a problem, as
> cpu_mask_to_apicid() returning a bogus (offline) value didn't have any bad
> effect: Before use, a valid destination would have been put in place, but
> other parts of .msg were properly set up. Furthermore we also didn't clear
> the entire message prior to that change.
>
> Many thanks go to Marek for tirelessly trying out various debugging
> suggestions.
>
> --- a/xen/arch/x86/hpet.c
> +++ b/xen/arch/x86/hpet.c
> @@ -685,12 +685,18 @@ void hpet_broadcast_resume(void)
>       for ( i = 0; i < n; i++ )
>       {
>           if ( hpet_events[i].msi.irq >= 0 )
> +        {
> +            struct irq_desc *desc = irq_to_desc(hpet_events[i].msi.irq);
> +
> +            cpumask_copy(desc->arch.cpu_mask, cpumask_of(smp_processor_id()));
> +
>               __hpet_setup_msi_irq(irq_to_desc(hpet_events[i].msi.irq));

We can directly reuse "desc" here, since the pointer returned by
irq_to_desc(...) isn't changed by the cpumask_copy() call.

i.e `__hpet_setup_msi_irq(desc);`

> +        }
>
>           /* set HPET Tn as oneshot */
>           cfg = hpet_read32(HPET_Tn_CFG(hpet_events[i].idx));
>           cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC);
> -        cfg |= HPET_TN_ENABLE | HPET_TN_32BIT;
> +        cfg |= HPET_TN_32BIT;
>           if ( !(hpet_events[i].flags & HPET_EVT_LEGACY) )
>               cfg |= HPET_TN_FSB;
>           hpet_write32(cfg, HPET_Tn_CFG(hpet_events[i].idx));
>

Teddy


--
Teddy Astie | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech
