
Re: IOMMU faults after S3


  • To: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 2 Apr 2026 10:39:41 +0200
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 02 Apr 2026 08:39:51 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 02.04.2026 10:08, Marek Marczykowski-Górecki wrote:
> The xl dmesg output (from MTL this time):
> 
>     (XEN) [  123.477511] Entering ACPI S3 state.
>     (XEN) [18446743903.571842] _disable_pit_irq:2649: using_pit: 0, 
> cpu_has_apic: 1
>     (XEN) [18446743903.571856] _disable_pit_irq:2659: 
> cpuidle_using_deep_cstate: 1, boot_cpu_has(X86_FEATURE_XEN_ARAT): 0

XEN_ARAT being off is the one odd aspect here. That'll want tracking down
separately. As per the xen-cpuid output (below), ARAT is available.
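
To help with that, a minimal debug sketch along the lines below would compare
the raw CPUID ARAT bit (leaf 6, EAX bit 2) with the synthetic flag Xen derived
from it. The placement (next to the existing _disable_pit_irq() printk) is just
my assumption, not a claim about where the problem sits:

    /* Debug-only sketch (assumed placement, e.g. in _disable_pit_irq()):
     * compare the raw CPUID ARAT bit with Xen's view of the feature. */
    {
        unsigned int eax, ebx, ecx, edx;

        cpuid(6, &eax, &ebx, &ecx, &edx);
        printk(XENLOG_DEBUG "ARAT raw: %u, XEN_ARAT: %u\n",
               (eax >> 2) & 1, !!boot_cpu_has(X86_FEATURE_XEN_ARAT));
    }

If the raw bit reads 1 while XEN_ARAT reads 0, the synthetic flag is being
lost (or never set) somewhere on this path.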

>     (XEN) [18446743903.571866] _disable_pit_irq:2662: init: 0
>     (XEN) [18446743903.571877] hpet_broadcast_resume:661: hpet_events: 
> ffff83046bc1f080
>     (XEN) [18446743903.572020] hpet_broadcast_resume:672: num_hpets_used: 8
>     (XEN) [18446743903.572029] hpet_broadcast_resume:690: cfg: 0x1
>     (XEN) [18446743903.572040] hpet_broadcast_resume:695: i:0, 
> hpet_events[i].msi.irq: 122, hpet_events[i].flags: 0
>     (XEN) [18446743903.572081] hpet_broadcast_resume:706: i:0, cfg: 0xc134
>     (XEN) [18446743903.572089] hpet_broadcast_resume:695: i:1, 
> hpet_events[i].msi.irq: 123, hpet_events[i].flags: 0
>     (XEN) [18446743903.572123] hpet_broadcast_resume:706: i:1, cfg: 0xc104
>     (XEN) [18446743903.572132] hpet_broadcast_resume:695: i:2, 
> hpet_events[i].msi.irq: 124, hpet_events[i].flags: 0
>     (XEN) [18446743903.572167] hpet_broadcast_resume:706: i:2, cfg: 0xc104
>     (XEN) [18446743903.572175] hpet_broadcast_resume:695: i:3, 
> hpet_events[i].msi.irq: 125, hpet_events[i].flags: 0
>     (XEN) [18446743903.572210] hpet_broadcast_resume:706: i:3, cfg: 0xc104
>     (XEN) [18446743903.572218] hpet_broadcast_resume:695: i:4, 
> hpet_events[i].msi.irq: 126, hpet_events[i].flags: 0
>     (XEN) [18446743903.572252] hpet_broadcast_resume:706: i:4, cfg: 0xc104
>     (XEN) [18446743903.572261] hpet_broadcast_resume:695: i:5, 
> hpet_events[i].msi.irq: 127, hpet_events[i].flags: 0
>     (XEN) [18446743903.572294] hpet_broadcast_resume:706: i:5, cfg: 0xc104
>     (XEN) [18446743903.572303] hpet_broadcast_resume:695: i:6, 
> hpet_events[i].msi.irq: 128, hpet_events[i].flags: 0
>     (XEN) [18446743903.572338] hpet_broadcast_resume:706: i:6, cfg: 0xc104
>     (XEN) [18446743903.572347] hpet_broadcast_resume:695: i:7, 
> hpet_events[i].msi.irq: 129, hpet_events[i].flags: 0
>     (XEN) [18446743903.572382] hpet_broadcast_resume:706: i:7, cfg: 0xc104

Hmm, but what you didn't log is whether __hpet_setup_msi_irq() actually
succeeded everywhere. (And if it did, also logging HPET_Tn_ROUTE() values
might be a good idea, if only to double-check.)

All of the values logged look entirely plausible, given that XEN_ARAT is off.
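
In case it helps, the extra logging suggested above could look roughly like
this, inside the per-channel loop of hpet_broadcast_resume(). Treat it purely
as a sketch: the __hpet_setup_msi_irq() call signature, the ch->idx field, and
the hpet_read32()/HPET_Tn_ROUTE() accessors are from memory of
xen/arch/x86/hpet.c and may need adjusting to the tree you're debugging:

    /* Illustrative only: log the result of __hpet_setup_msi_irq() and dump
     * both halves of the FSB route register for each channel. */
    {
        struct hpet_event_channel *ch = &hpet_events[i];
        int rc = __hpet_setup_msi_irq(irq_to_desc(ch->msi.irq));

        printk(XENLOG_DEBUG "%s:%d: i:%u, setup rc:%d, Tn_ROUTE: %#x:%#x\n",
               __func__, __LINE__, i, rc,
               hpet_read32(HPET_Tn_ROUTE(ch->idx) + 4),
               hpet_read32(HPET_Tn_ROUTE(ch->idx)));
    }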

> And the xen-cpuid -p output from this system:
> 
>     Xen reports there are maximum 120 leaves and 2 MSRs
>     Raw policy: 48 leaves, 2 MSRs
>      CPUID:
>       leaf     subleaf  -> eax      ebx      ecx      edx     
>       00000000:ffffffff -> 00000023:756e6547:6c65746e:49656e69
>       00000001:ffffffff -> 000a06a4:20800800:77fafbff:bfebfbff
>       00000002:ffffffff -> 00feff01:000000f0:00000000:00000000
>       00000004:00000000 -> fc004121:02c0003f:0000003f:00000000
>       00000004:00000001 -> fc004122:03c0003f:0000003f:00000000
>       00000004:00000002 -> fc01c143:03c0003f:000007ff:00000000
>       00000004:00000003 -> fc0fc163:02c0003f:00007fff:00000004
>       00000005:ffffffff -> 00000040:00000040:00000003:11112020
>       00000006:ffffffff -> 00dfcff7:00000002:00000409:00040003
>       00000007:00000000 -> 00000002:239c27eb:994007ac:fc18c410
>       00000007:00000001 -> 40400910:00000001:00000000:00040000
>       00000007:00000002 -> 00000000:00000000:00000000:0000003f
>       0000000a:ffffffff -> 07300805:00000000:00000007:00008603
>       0000000b:00000000 -> 00000001:00000002:00000100:00000020
>       0000000b:00000001 -> 00000007:00000016:00000201:00000020
>       0000000d:00000000 -> 00000207:00000000:00000a88:00000000
>       0000000d:00000001 -> 0000000f:00000000:00019900:00000000
>       0000000d:00000002 -> 00000100:00000240:00000000:00000000
>       0000000d:00000008 -> 00000080:00000000:00000001:00000000
>       0000000d:00000009 -> 00000008:00000a80:00000000:00000000
>       0000000d:0000000b -> 00000010:00000000:00000001:00000000
>       0000000d:0000000c -> 00000018:00000000:00000001:00000000
>       0000000d:0000000f -> 00000328:00000000:00000001:00000000
>       0000000d:00000010 -> 00000008:00000000:00000001:00000000
>       80000000:ffffffff -> 80000008:00000000:00000000:00000000
>       80000001:ffffffff -> 00000000:00000000:00000121:2c100800
>       80000002:ffffffff -> 65746e49:2952286c:726f4320:4d542865
>       80000003:ffffffff -> 6c552029:20617274:35312037:00004835
>       80000006:ffffffff -> 00000000:00000000:08007040:00000000
>       80000007:ffffffff -> 00000000:00000000:00000000:00000100
>       80000008:ffffffff -> 0000302e:00000000:00000000:00000000
>      MSRs:
>       index    -> value           
>       000000ce -> 0000000080000000
>       0000010a -> 000000000d89fd6b
>     Host policy: 41 leaves, 2 MSRs
>      CPUID:
>       leaf     subleaf  -> eax      ebx      ecx      edx     
>       00000000:ffffffff -> 0000000d:756e6547:6c65746e:49656e69
>       00000001:ffffffff -> 000a06a4:20800800:77fafbff:bfebfbff
>       00000002:ffffffff -> 00feff01:000000f0:00000000:00000000
>       00000004:00000000 -> fc004121:02c0003f:0000003f:00000000
>       00000004:00000001 -> fc004122:03c0003f:0000003f:00000000
>       00000004:00000002 -> fc01c143:03c0003f:000007ff:00000000
>       00000004:00000003 -> fc0fc163:02c0003f:00007fff:00000004
>       00000005:ffffffff -> 00000040:00000040:00000003:11112020
>       00000006:ffffffff -> 00dfcff7:00000002:00000409:00040003

Still ARAT available as per here.
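
(To spell out how that's read off the dump: ARAT is bit 2 of leaf 6 EAX, and
the host policy reports EAX = 0x00dfcff7 there, so the bit is set. A trivial
user-space check, nothing Xen-specific:)

    #include <stdio.h>

    int main(void)
    {
        unsigned int leaf6_eax = 0x00dfcff7; /* value from the dump above */

        printf("ARAT: %u\n", (leaf6_eax >> 2) & 1); /* prints "ARAT: 1" */
        return 0;
    }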

Jan



 

