[Xen-devel] [PATCH] x86: Allow write-combining on MMIO mappings again
From: David Woodhouse <dwmw@xxxxxxxxxx>

Since the fix for XSA-154 in commit c61a6f74f80e ("x86: enforce
consistent cachability of MMIO mappings"), HVM guests have no longer
been able to use PAT to obtain write-combining on MMIO because the
'ignore PAT' bit is set in EPT.

For MMIO we shouldn't be setting the 'ignore PAT' bit, although we
probably want to err on the side of caution and still do so for
addresses in mmio_ro_ranges.

That necessitates a slight refactoring to let MMIO (for which
mfn_valid() can be false) get through to the right code path.

Since we're not bailing out for !mfn_valid() immediately, the range
checks need to be adjusted to cope: simply by masking in the low bits
to account for 'order' instead of adding, to avoid overflow when the
mfn is INVALID_MFN (which happens on unmap, since we carefully call
this function to fill in the EMT even though the PTE won't be valid).

The range checks are also slightly refactored so that only one of them
sits on the fast path in the common case. If the region doesn't
overlap, then it *definitely* isn't contained, so we don't need both
checks. And if it overlaps and is only one page, then it definitely
*is* contained.

Finally, add a comment clarifying how that 'return -1' works: it isn't
returning an error and causing the mapping to fail; it relies on
resolve_misconfig() being able to split the mapping later. So it's
*only* sane to do it where order > 0 and the 'problem' will be solved
by splitting the large page. It is not a way of blindly returning
'error', which I was tempted to do in my first attempt.

Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxx>
---
 xen/arch/x86/hvm/mtrr.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 709759c..41ae8b4 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -773,18 +773,20 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
     if ( v->domain != d )
         v = d->vcpu ? d->vcpu[0] : NULL;
 
-    if ( !mfn_valid(mfn_x(mfn)) ||
-         rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
-                                 mfn_x(mfn) + (1UL << order) - 1) )
+    /* Mask, not add, for order so it works with INVALID_MFN on unmapping */
+    if ( rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
+                                 mfn_x(mfn) | ((1UL << order) - 1)) )
     {
-        *ipat = 1;
-        return MTRR_TYPE_UNCACHABLE;
+        if ( !order || rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
+                                               mfn_x(mfn) | ((1UL << order) - 1)) )
+        {
+            *ipat = 1;
+            return MTRR_TYPE_UNCACHABLE;
+        }
+        /* Force invalid memory type so resolve_misconfig() will split it */
+        return -1;
     }
 
-    if ( rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
-                                 mfn_x(mfn) + (1UL << order) - 1) )
-        return -1;
-
     if ( direct_mmio )
     {
         if ( (mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >> order )
@@ -795,6 +797,12 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return MTRR_TYPE_WRBACK;
     }
 
+    if ( !mfn_valid(mfn_x(mfn)) )
+    {
+        *ipat = 1;
+        return MTRR_TYPE_UNCACHABLE;
+    }
+
     if ( !need_iommu(d) && !cache_flush_permitted(d) )
     {
         *ipat = 1;
--
2.7.4
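
To make the INVALID_MFN point concrete, here is a minimal standalone
C sketch (not Xen code; it only assumes INVALID_MFN is all-ones, which
is how the overflow arises):

#include <stdio.h>

#define INVALID_MFN (~0UL)      /* all-ones, as seen on unmap */

int main(void)
{
    unsigned long mfn = INVALID_MFN;
    unsigned int order = 9;     /* a 2MB superpage: 512 4k frames */

    /* Old form: the addition wraps around to a tiny bogus end frame */
    unsigned long end_add  = mfn + (1UL << order) - 1;

    /* New form: all low bits are already set, so the value is unchanged */
    unsigned long end_mask = mfn | ((1UL << order) - 1);

    printf("add:  %#lx\n", end_add);   /* 0x1fe on 64-bit */
    printf("mask: %#lx\n", end_mask);  /* 0xffffffffffffffff */
    return 0;
}

Masking works because an order-aligned mfn has all 'order' low bits
clear, so OR-ing them in yields exactly the last frame of the range,
while an all-ones mfn simply stays all-ones instead of wrapping.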
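
Similarly, the overlap/contains shortcut can be sanity-checked in
isolation. The helpers below are hypothetical single-range stand-ins
for rangeset_overlaps_range() and rangeset_contains_range(); real Xen
rangesets hold multiple ranges, but the decision structure mirrors the
patched epte_get_entry_emt():

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical single-range stand-ins for Xen's rangeset helpers */
struct range { unsigned long lo, hi; };

static bool overlaps(struct range r, unsigned long s, unsigned long e)
{
    return s <= r.hi && e >= r.lo;
}

static bool contains(struct range r, unsigned long s, unsigned long e)
{
    return s >= r.lo && e <= r.hi;
}

int main(void)
{
    struct range ro = { 0x100, 0x2ff };  /* stand-in for mmio_ro_ranges */
    unsigned long mfn = 0x000;           /* order-aligned superpage start */
    unsigned int order = 9;
    unsigned long end = mfn | ((1UL << order) - 1);  /* 0x1ff */

    if ( !overlaps(ro, mfn, end) )
        /* No overlap implies not contained: one check on the fast path */
        printf("normal mapping\n");
    else if ( !order || contains(ro, mfn, end) )
        /* A single overlapping page is always contained */
        printf("read-only: uncachable, ignore PAT\n");
    else
        /* Superpage only partially read-only: return -1 so that
         * resolve_misconfig() splits it and retries per page */
        printf("force a split\n");

    return 0;
}

The common case (no overlap with mmio_ro_ranges) takes the first branch
after a single check; the contains() test is only reached for mappings
that actually touch a read-only range. Here the superpage [0, 0x1ff]
has its upper half inside the read-only range, so it must be split.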