
[Xen-devel] [PATCHv4 0/8] x86/xen: fixes for mapping high MMIO regions (and remove _PAGE_IOMAP)



[ x86 maintainers, this is predominantly a Xen series but the end
result is that the _PAGE_IOMAP PTE flag is removed. See patch #8. ]

This is a fix for the problems with mapping high MMIO regions in
certain cases (e.g., the RDMA drivers): not all mappers were
specifying _PAGE_IOMAP, which meant no valid MFN could be found and
the resulting PTEs were set as not present, causing subsequent faults.
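For illustration only, here is a much-simplified sketch of the old
behaviour (stand-in names and values, not the actual mmu.c code):

  /* Illustrative stand-ins for the real kernel definitions. */
  #define PAGE_SHIFT        12
  #define PTE_PFN_MASK      0x000ffffffffff000UL
  #define _PAGE_PRESENT     0x001UL
  #define _PAGE_IOMAP       0x400UL            /* the flag being removed */
  #define INVALID_P2M_ENTRY (~0UL)

  extern unsigned long get_phys_to_machine(unsigned long pfn);

  /* Roughly what PFN to MFN conversion did before this series: without
   * _PAGE_IOMAP an I/O frame has no entry in the p2m, so the PTE is
   * zapped and later accesses fault. */
  static unsigned long old_pte_pfn_to_mfn(unsigned long val)
  {
          unsigned long pfn, mfn;

          if (!(val & _PAGE_PRESENT))
                  return val;

          if (val & _PAGE_IOMAP)
                  return val;                  /* caller said I/O: frame used as-is */

          pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
          mfn = get_phys_to_machine(pfn);
          if (mfn == INVALID_P2M_ENTRY)
                  return 0;                    /* no MFN: empty not-present PTE */

          return (val & ~PTE_PFN_MASK) | (mfn << PAGE_SHIFT);
  }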

It assumes that anything that isn't RAM (whether ballooned out or not)
is an I/O region and thus should be 1:1 in the p2m.  Specifically, the
region after the end of the E820 map and the region beyond the end of
the p2m.  Ballooned frames are still marked as missing in the p2m as
before.
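The resulting p2m policy can be summarised with a sketch like the
following (again illustrative; the real series stores large 1:1
ranges compactly via the new p2m_mid_identity pages, and the names
below only loosely follow the existing helpers):

  /* Sketch of the policy: everything not covered by the E820 map is
   * assumed to be I/O and mapped 1:1 in the p2m.  Ballooned frames
   * keep their "missing" entries.  Names are illustrative. */
  extern unsigned long xen_p2m_size;                  /* end of the p2m */
  extern unsigned long set_phys_range_identity(unsigned long pfn_s,
                                               unsigned long pfn_e);

  static void example_mark_io_regions(unsigned long last_e820_pfn)
  {
          /* Region after the end of the E820 map, up to the end of
           * the p2m: stored as identity (1:1) entries. */
          set_phys_range_identity(last_e820_pfn, xen_p2m_size);

          /* Region beyond the end of the p2m: nothing is stored; the
           * p2m lookup simply treats any pfn >= xen_p2m_size as
           * identity-mapped. */
  }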

As a follow-on, pte_mfn_to_pfn() and pte_pfn_to_mfn() are modified to
not use the _PAGE_IOMAP PTE flag, and MFN-to-PFN and PFN-to-MFN
translations now do the right thing for all I/O regions.  This means
the Xen-specific _PAGE_IOMAP flag can be removed.
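A rough sketch of the PFN-to-MFN direction after the change, reusing
the stand-in definitions from the sketch above (IDENTITY_FRAME_BIT
here is an illustrative marker for identity p2m entries, not
necessarily the exact encoding the series uses):

  #define IDENTITY_FRAME_BIT (1UL << 52)       /* illustrative identity marker */

  /* With non-RAM regions recorded as identity in the p2m, the lookup
   * itself says "this is I/O, use the frame unchanged", so no
   * _PAGE_IOMAP hint from the caller is needed any more. */
  static unsigned long new_pte_pfn_to_mfn(unsigned long val)
  {
          unsigned long pfn, mfn;

          if (!(val & _PAGE_PRESENT))
                  return val;

          pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
          mfn = get_phys_to_machine(pfn);

          if (mfn == INVALID_P2M_ENTRY)        /* ballooned out: still missing */
                  return 0;

          if (mfn & IDENTITY_FRAME_BIT)        /* I/O region: 1:1, strip the marker */
                  mfn &= ~IDENTITY_FRAME_BIT;

          return (val & ~PTE_PFN_MASK) | (mfn << PAGE_SHIFT);
  }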

This series has been tested (in dom0) on all unique machines we have
in our test lab (~100 machines), some of which have PCI devices with
BARs above the end of RAM.

Note this does not fix a 32-bit dom0 trying to access BARs above
16 TiB, as this is caused by MFNs/PFNs being limited to 32 bits
(unsigned long): with 4 KiB pages, a 32-bit frame number can address
at most 2^32 * 4 KiB = 16 TiB.

You may find it useful to apply patch #3 to more easily review the
updated p2m diagram.

Changes in v4:
- fix p2m_mid_identity initialization.

Changes in v3 (not posted):
- use correct end of e820
- fix xen_remap_domain_mfn_range()

Changes in v2:
- fix to actually set the region from the end of RAM up to 512 GiB as 1:1.
- introduce p2m_mid_identity to efficiently store large 1:1 regions.
- Split the _PAGE_IOMAP patch into Xen and generic x86 halves.

David




 

