[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH] x86/HPET: channel handling in hpet_broadcast_resume()


  • To: Teddy Astie <teddy.astie@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 7 Apr 2026 17:28:06 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Marek Marczykowski <marmarek@xxxxxxxxxxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 07 Apr 2026 15:28:05 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 07.04.2026 16:28, Teddy Astie wrote:
> Le 07/04/2026 à 15:35, Jan Beulich a écrit :
>> The per-channel ENABLE bit is to solely be driven by hpet_enable_channel()
>> and hpet_msi_{,un}mask(). It doesn't need setting immediately. Except for
>> the (possible) channel put in legacy mode we don't do so during boot
>> either.
>>
>> Instead reset ->arch.cpu_mask, to avoid msi_compose_msg() yielding an
>> all-zero message (when the passed in CPU mask has no online CPUs). Nothing
>> would later call msi_compose_msg() / hpet_msi_write(), and hence nothing
>> would later produce a well-formed message template in
>> hpet_events[].msi.msg.
>>
>> Fixes: 15aa6c67486c ("amd iommu: use base platform MSI implementation")
>> Reported-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> Tested-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
>> ---
>> As to the Fixes: tag: The issue for the HPET resume case is the
>> cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) check in
>> msi_compose_msg(). The earlier cpumask_empty() wasn't a problem, as
>> cpu_mask_to_apicid() returning a bogus (offline) value didn't have any bad
>> effect: Before use, a valid destination would have been put in place, but
>> other parts of .msg were properly set up. Furthermore we also didn't clear
>> the entire message prior to that change.
>>
>> Many thanks go to Marek for tirelessly trying out various debugging
>> suggestions.
>>
>> --- a/xen/arch/x86/hpet.c
>> +++ b/xen/arch/x86/hpet.c
>> @@ -685,12 +685,18 @@ void hpet_broadcast_resume(void)
>>       for ( i = 0; i < n; i++ )
>>       {
>>           if ( hpet_events[i].msi.irq >= 0 )
>> +        {
>> +            struct irq_desc *desc = irq_to_desc(hpet_events[i].msi.irq);
>> +
>> +            cpumask_copy(desc->arch.cpu_mask, cpumask_of(smp_processor_id()));
>> +
>>               __hpet_setup_msi_irq(irq_to_desc(hpet_events[i].msi.irq));
> 
> We can directly reuse "desc" here, since cpumask_copy() doesn't change
> what irq_to_desc(...) returns.
> 
> i.e `__hpet_setup_msi_irq(desc);`

Oh, indeed - how did I not spot this?
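
I.e. (editorial note, assembled from the hunk above plus the suggestion) the
loop body would presumably end up reading:

          if ( hpet_events[i].msi.irq >= 0 )
          {
              struct irq_desc *desc = irq_to_desc(hpet_events[i].msi.irq);

              /* Give msi_compose_msg() a valid (online) destination CPU. */
              cpumask_copy(desc->arch.cpu_mask, cpumask_of(smp_processor_id()));

              __hpet_setup_msi_irq(desc);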

Jan

