
Re: IOMMU faults after S3


  • To: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 7 Apr 2026 12:23:16 +0200
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 07 Apr 2026 10:23:27 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 07.04.2026 08:29, Jan Beulich wrote:
> On 03.04.2026 01:06, Marek Marczykowski-Górecki wrote:
>> On Thu, Apr 02, 2026 at 04:53:31PM +0200, Jan Beulich wrote:
>>> Sadly you now log the low halves of HPET_Tn_ROUTE twice, while you don't log
>>> the high halves at all.
>>
>> I was missing hpet_read32 there...
>>
>> Updated:
>> (XEN) [  116.921573] Entering ACPI S3 state.
>> (XEN) [18446743895.088893] _disable_pit_irq:2649: using_pit: 0, 
>> cpu_has_apic: 1
>> (XEN) [18446743895.088907] _disable_pit_irq:2659: cpuidle_using_deep_cstate: 
>> 1, boot_cpu_has(X86_FEATURE_XEN_ARAT): 0
>> (XEN) [18446743895.088918] _disable_pit_irq:2662: init: 0
>> (XEN) [18446743895.088928] hpet_broadcast_resume:662: hpet_events: 
>> ffff83046bc1f080
>> (XEN) [18446743895.089072] hpet_broadcast_resume:673: num_hpets_used: 8
>> (XEN) [18446743895.089081] hpet_broadcast_resume:691: cfg: 0x1
>> (XEN) [18446743895.089092] hpet_broadcast_resume:696: i:0, 
>> hpet_events[i].msi.irq: 122, hpet_events[i].flags: 0
>> (XEN) [18446743895.089122] hpet_msi_write:286: iommu_update_ire_from_msi rc: 0
>> (XEN) [18446743895.089132] hpet_broadcast_resume:700: i:0, 
>> __hpet_setup_msi_irq ret: 0
>> (XEN) [18446743895.089168] hpet_broadcast_resume:710: i:0, cfg: 0xc134, 
>> hpet_read32(HPET_Tn_ROUTE(hpet_events[i].idx)): 0, 
>> hpet_read32(HPET_Tn_ROUTE(hpet_events[i].idx) + 4): 0xf18
> 
> Okay, this would appear to clarify that the address really isn't correct. Yet
> I'm confused now by the low half values: In your earlier log there was
> 
> hpet_broadcast_resume:710: i:0, cfg: 0xc134, 
> HPET_Tn_ROUTE(hpet_events[i].idx): 0x110
> 
> and alike, i.e. clearly a non-zero value. Now all low halves are zero. I'll
> try to figure out how the logged values here could result, but consistent
> data (or an explanation for the apparent inconsistency) would help.

Could you give the patch below a try?

Jan

x86/HPET: channel handling in hpet_broadcast_resume()

The per-channel ENABLE bit is to be driven solely by hpet_enable_channel()
and hpet_msi_{,un}mask(). It doesn't need setting immediately. Except for
the channel (possibly) put in legacy mode, we don't do so during boot
either.

Instead, reset ->arch.cpu_mask, to avoid msi_compose_msg() yielding an
all-zero message (which happens when the passed-in CPU mask holds no online
CPUs). Nothing would later call msi_compose_msg() / hpet_msi_write(), and
hence nothing would later produce a well-formed message template in
hpet_events[].msi.msg.

Fixes: 15aa6c67486c ("amd iommu: use base platform MSI implementation")
Reported-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
As to the Fixes: tag: The issue for the HPET resume case is the
cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) check in
msi_compose_msg(). The earlier cpumask_empty() check wasn't a problem, as
cpu_mask_to_apicid() returning a bogus (offline) value didn't have any bad
effect: Before use, a valid destination would have been put in place, while
other parts of .msg were properly set up. Furthermore we also didn't clear
the entire message prior to that change.

--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -685,12 +685,18 @@ void hpet_broadcast_resume(void)
     for ( i = 0; i < n; i++ )
     {
         if ( hpet_events[i].msi.irq >= 0 )
+        {
+            struct irq_desc *desc = irq_to_desc(hpet_events[i].msi.irq);
+
+            cpumask_copy(desc->arch.cpu_mask, cpumask_of(smp_processor_id()));
+
             __hpet_setup_msi_irq(irq_to_desc(hpet_events[i].msi.irq));
+        }
 
         /* set HPET Tn as oneshot */
         cfg = hpet_read32(HPET_Tn_CFG(hpet_events[i].idx));
         cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC);
-        cfg |= HPET_TN_ENABLE | HPET_TN_32BIT;
+        cfg |= HPET_TN_32BIT;
         if ( !(hpet_events[i].flags & HPET_EVT_LEGACY) )
             cfg |= HPET_TN_FSB;
         hpet_write32(cfg, HPET_Tn_CFG(hpet_events[i].idx));

