
Re: Problems in PV dom0 on recent x86 hardware



On 08.07.24 10:32, Andrew Cooper wrote:
> On 08/07/2024 9:15 am, Jürgen Groß wrote:
>> I've got an internal report about failures in dom0 when booting with
>> Xen on a Thinkpad P14s Gen 3 AMD (kernel 6.9).
>>
>> With some debugging I've found that the UCSI driver seems to fail to
>> map MFN feec2 as iomem, as the hypervisor is denying this mapping due
>> to the MFN being part of the MSI space. The mapping attempt seems to
>> be the result of an ACPI call by the UCSI driver:
>> [   44.575345] RIP: e030:xen_mc_flush+0x1e8/0x2b0
>> [   44.575418]  xen_leave_lazy_mmu+0x15/0x60
>> [   44.575425]  vmap_range_noflush+0x408/0x6f0
>> [   44.575438]  __ioremap_caller+0x20d/0x350
>> [   44.575450]  acpi_os_map_iomem+0x1a3/0x1c0
>> [   44.575454]  acpi_ex_system_memory_space_handler+0x229/0x3f0
>> [   44.575464]  acpi_ev_address_space_dispatch+0x17e/0x4c0
>> [   44.575474]  acpi_ex_access_region+0x28a/0x510
>> [   44.575479]  acpi_ex_field_datum_io+0x95/0x5c0
>> [   44.575482]  acpi_ex_extract_from_field+0x36b/0x4e0
>> [   44.575490]  acpi_ex_read_data_from_field+0xcb/0x430
>> [   44.575493]  acpi_ex_resolve_node_to_value+0x2e0/0x530
>> [   44.575496]  acpi_ex_resolve_to_value+0x1e7/0x550
>> [   44.575499]  acpi_ds_evaluate_name_path+0x107/0x170
>> [   44.575505]  acpi_ds_exec_end_op+0x392/0x860
>> [   44.575508]  acpi_ps_parse_loop+0x268/0xa30
>> [   44.575515]  acpi_ps_parse_aml+0x221/0x5e0
>> [   44.575518]  acpi_ps_execute_method+0x171/0x3e0
>> [   44.575522]  acpi_ns_evaluate+0x174/0x5d0
>> [   44.575525]  acpi_evaluate_object+0x167/0x440
>> [   44.575529]  acpi_evaluate_dsm+0xb6/0x130
>> [   44.575541]  ucsi_acpi_dsm+0x53/0x80
>> [   44.575546]  ucsi_acpi_read+0x2e/0x60
>> [   44.575550]  ucsi_register+0x24/0xa0
>> [   44.575555]  ucsi_acpi_probe+0x162/0x1e3
>> [   44.575559]  platform_probe+0x48/0x90
>> [   44.575567]  really_probe+0xde/0x340
>> [   44.575579]  __driver_probe_device+0x78/0x110
>> [   44.575581]  driver_probe_device+0x1f/0x90
>> [   44.575584]  __driver_attach+0xd2/0x1c0
>> [   44.575587]  bus_for_each_dev+0x77/0xc0
>> [   44.575590]  bus_add_driver+0x112/0x1f0
>> [   44.575593]  driver_register+0x72/0xd0
>> [   44.575600]  do_one_initcall+0x48/0x300
>> [   44.575607]  do_init_module+0x60/0x220
>> [   44.575615]  __do_sys_init_module+0x17f/0x1b0
>> [   44.575623]  do_syscall_64+0x82/0x170
>> [   44.575685] 1 of 1 multicall(s) failed: cpu 4
>> [   44.575695]   call  1: op=1 result=-1
>> caller=xen_extend_mmu_update+0x4e/0xd0 pars=ffff888267e25ad0 1 0 7ff0
>> args=9ba37a678 80000000feec2073

>> The pte value of the mmu_update call is 80000000feec2073, which is
>> rejected by the hypervisor with -EPERM.
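
For reference: "op=1" in the multicall dump is the mmu_update hypercall,
and the second "args" value is the new PTE, whose fields follow the usual
x86-64 page-table layout. A minimal standalone decode of that value, as
an illustration only (this is neither kernel nor Xen code):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_FRAME_MASK 0x000ffffffffff000ULL    /* frame number, bits 51:12 */

int main(void)
{
    uint64_t pte = 0x80000000feec2073ULL;       /* second "args" value above */
    uint64_t mfn = (pte & PTE_FRAME_MASK) >> 12;

    printf("MFN = %#" PRIx64 "\n", mfn);        /* prints 0xfeec2 */
    printf("P=%d RW=%d US=%d PWT=%d PCD=%d A=%d D=%d NX=%d\n",
           (int)(pte & 1), (int)((pte >> 1) & 1), (int)((pte >> 2) & 1),
           (int)((pte >> 3) & 1), (int)((pte >> 4) & 1),
           (int)((pte >> 5) & 1), (int)((pte >> 6) & 1),
           (int)((pte >> 63) & 1));
    return 0;
}

The decoded frame 0xfeec2 puts the target at physical address 0xfeec2000,
and the PCD + NX flag combination is what an ioremap() of device memory
would be expected to produce.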

>> Before diving deep into the UCSI internals, is it possible that the
>> hypervisor needs some update (IOW: could it be that the mapping attempt
>> should rather be honored, as there might be an I/O resource at this
>> position which dom0 needs to access for using the related hardware)?


> It's only MSI space for external accesses.  For CPU accesses it's other
> things, notably the LAPIC MMIO window.
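
To make that concrete: the refusal is positional, not device-specific.
A rough sketch of the kind of check involved, assuming the conventional
0xfee00000-0xfeefffff window (this is not Xen's actual code, and the
helper name is made up):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define APIC_MSI_BASE 0xfee00000ULL
#define APIC_MSI_END  0xfeefffffULL

/* Devices treat writes in this window as MSI messages and the CPU sees
 * the local APIC page here, so PV dom0 gets no iomem access to it. */
static bool mfn_in_msi_window(uint64_t mfn)
{
    uint64_t paddr = mfn << 12;                 /* 4 KiB frames */
    return paddr >= APIC_MSI_BASE && paddr <= APIC_MSI_END;
}

int main(void)
{
    assert(mfn_in_msi_window(0xfeec2));         /* the MFN from the report */
    assert(!mfn_in_msi_window(0xfef00));        /* first MFN past the window */
    return 0;
}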

> Do we know what this range is supposed to be for?  I do find it
> surprising for a USB BAR to be here.

I have requested more information from a bare metal boot, especially
/proc/iomem and output of lspci -v.
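
As a quick first look at those dumps, a small helper like the one below
(hypothetical, not part of any driver; it needs root, since /proc/iomem
shows zeroed ranges to ordinary users) prints whichever resource entries
cover the suspicious address:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/iomem", "r");
    char line[256];
    unsigned long long target = 0xfeec2000ULL, start, end;

    if (!f) {
        perror("/proc/iomem");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        /* Entries look like "  fee00000-fee00fff : Local APIC". */
        if (sscanf(line, " %llx-%llx", &start, &end) == 2 &&
            start <= target && target <= end)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}

No output would mean the firmware claims no resource at 0xfeec2000 at
all, which would point at the AML access rather than at Xen's policy.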


Juergen