Re: [PATCH] x86/hvmloader: don't set xenpci MMIO BAR as UC in MTRR
On 30.05.2025 11:23, Roger Pau Monne wrote:
> The Xen PCI device (vendor ID 0x5853) exposed to x86 HVM guests doesn't
> have the functionality of a traditional PCI device. The exposed MMIO BAR is
> used by some guests (including Linux) as a safe place to map foreign
> memory, including the grant table itself.
>
> Traditionally BARs from devices get the uncacheable (UC) cache attribute
> via the MTRRs, to ensure correct functionality of such devices. hvmloader
> mimics this behaviour and marks both the low and high PCI MMIO windows
> (where BARs of PCI devices reside) as UC in the MTRRs.
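For illustration, a minimal sketch of what marking a MMIO window as UC with a
variable-range MTRR pair looks like (hypothetical helpers, not the actual
hvmloader code; register selection and the physical address mask are
simplified):

  #include <stdint.h>

  #define MSR_MTRR_PHYSBASE(n)   (0x200 + 2 * (n))   /* IA32_MTRR_PHYSBASEn */
  #define MSR_MTRR_PHYSMASK(n)   (0x201 + 2 * (n))   /* IA32_MTRR_PHYSMASKn */
  #define MTRR_TYPE_UNCACHABLE   0x00
  #define MTRR_PHYSMASK_VALID    (1ULL << 11)

  /* Assumed helpers: MSR write and the platform's physical address mask. */
  void wrmsr(uint32_t msr, uint64_t val);
  uint64_t phys_addr_mask(void);

  /* Mark a power-of-two aligned MMIO window as UC using MTRR pair 'reg'. */
  static void mark_mmio_uc(unsigned int reg, uint64_t base, uint64_t size)
  {
      wrmsr(MSR_MTRR_PHYSBASE(reg), base | MTRR_TYPE_UNCACHABLE);
      wrmsr(MSR_MTRR_PHYSMASK(reg),
            (phys_addr_mask() & ~(size - 1)) | MTRR_PHYSMASK_VALID);
  }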
>
> This however causes performance issues for the users of the Xen PCI device
> BAR, as for the purposes of mapping remote memory there's no need to use
> the UC attribute. On Intel systems this is worked around by using iPAT,
> which allows the hypervisor to force the effective cache attribute of a p2m
> entry regardless of the guest PAT value. AMD however doesn't have an
> equivalent of iPAT, and guest PAT values are always considered.
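As a reminder of why the guest-side PAT alone can't help here, a simplified
reading of the MTRR/PAT combination rules, only spelling out the row that
matters (illustrative sketch; see the SDM/APM combination tables for the
full rules):

  enum memtype { MT_UC, MT_WC, MT_WT, MT_WP, MT_WB };

  static enum memtype effective_type(enum memtype mtrr, enum memtype pat)
  {
      /* With an UC MTRR covering the range a WB PAT request still yields UC
       * (on Intel a WC PAT request is the one exception in this row), and
       * without an iPAT equivalent AMD can't override the combination. */
      if (mtrr == MT_UC)
          return pat == MT_WC ? MT_WC : MT_UC;
      /* ... remaining rows of the combination tables omitted ... */
      return pat;
  }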
>
> Linux commit:
>
> 41925b105e34 xen: replace xen_remap() with memremap()
>
> Attempted to mitigate this by forcing mappings of the grant-table to use
> the write-back (WB) cache attribute. However Linux memremap() takes MTRRs
> into account when calculating which PAT type to use, and since the MTRR
> cache attribute for the region is UC the resulting PAT type is also UC,
> regardless of the caller having requested WB.
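For reference, the Linux-side call pattern looks roughly like this (simplified
sketch; error handling and the surrounding grant-table setup omitted):

  #include <linux/io.h>

  /* memremap() with MEMREMAP_WB asks for a write-back mapping, but the x86
   * implementation consults MTRRs when picking the PAT type, so with a UC
   * MTRR covering the BAR the mapping still ends up UC. */
  static void *map_grant_frames(phys_addr_t bar_base, size_t size)
  {
      return memremap(bar_base, size, MEMREMAP_WB);
  }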
>
> As a workaround, to allow current Linux to map the grant-table as WB using
> memremap(), special case the Xen PCI device BAR in hvmloader and don't set
> its cache attribute to UC.
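Conceptually (names below are hypothetical, not what the patch actually does)
the special casing amounts to leaving the xenpci BAR out of the range that
gets the UC treatment:

  #include <stdint.h>

  struct mmio_range { uint64_t base, size; };

  /* Assumed to be recorded while BARs are sized and placed. */
  extern struct mmio_range xenpci_bar;

  /* Whether a sub-range of the PCI MMIO window still needs the UC MTRR
   * treatment: everything except the xenpci BAR does. */
  static int range_needs_uc(uint64_t base, uint64_t size)
  {
      return !(base >= xenpci_bar.base &&
               base + size <= xenpci_bar.base + xenpci_bar.size);
  }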
Can we (fully compatibly) make such a change? IOW do we know all possible
guests would be at least unaffected (ideally affected positively)? Imo ...
> Such a workaround in hvmloader should also be
> paired with a fix for Linux so it attempts to change the MTRR of the Xen
> PCI device BAR to WB by itself.
>
> Overall, the long term solution would be to provide the guest with a safe
> range in the guest physical address space where mappings to foreign pages
> can be created.
... this is the only viable path.
> Some vif throughput performance figures provided by Anthoine, from 8 vCPU,
> 4GB of RAM HVM guests running on AMD hardware:
>
> Without this patch:
> vm -> dom0: 1.1Gb/s
> vm -> vm: 5.0Gb/s
>
> With the patch:
> vm -> dom0: 4.5Gb/s
> vm -> vm: 7.0Gb/s
>
> Reported-by: Anthoine Bourgeois <anthoine.bourgeois@xxxxxxxxxx>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> I don't think the ACPI tables builder consumes the PCI window size
> information, as I'm not seeing any consumer of the acpi_info->pci_{min,len}
> fields, yet I've kept them covering the xenpci device BAR, hence the
> adjustment to hvmloader_acpi_build_tables() as part of this patch.
acpi_build_tables() copies the field, and the comment ahead of struct
acpi_info clarifies where the uses are: It's the PLEN field, which does
have a use in dsdt.asl. Aiui the change you make is therefore a necessary
one.
Jan