Re: [Xen-devel] [PATCH] [XEN-4.2] hvmloader: Workaround uncached P2M mappings being created when relocating RAM
>>> On 19.02.15 at 14:01, <malcolm.crossley@xxxxxxxxxx> wrote:
> Upstream commit 1c84d046735102e02d2df454ab07f14ac51f235d (the XSA-60 fix)
> stopped tracking the setting/unsetting of uncached mode in an HVM guest.
> (This was later fixed by the EPT misconfig changes in Xen 4.5.)
>
> On hosts which do not have forced IOMMU snooping, Xen allows uncached
> P2M mappings to be created, as cache issues would otherwise occur.
>
> When hvmloader relocates RAM out of the HVM guest's MMIO hole, Xen uses
> the default MTRR setting to determine the cache attribute for the
> destination RAM.
>
> hvmloader has not yet configured the default MTRR to enable write-back
> caching at the point the RAM is relocated, and so the destination RAM
> ends up being configured as uncached.
>
> Before the XSA-60 patches, the RAM would have had its cacheability
> setting changed to write-back once the default MTRR was configured to
> write-back.
>
> After the XSA-60 patches, this does not happen and the RAM remains
> uncached.
>
> This patch sets the default MTRR to write-back before relocating the RAM
> and restores the previous default MTRR after relocating the RAM.
>
> The patch was designed to be minimally invasive so it could be
> backported easily.

So why is this tagged XEN-4.2 then? The EPT misconfig handling got
introduced with 4.5, i.e. if there was such a (general) problem, it ought
to affect 4.3 and 4.4 just as much.

Of course, for the still maintained trees, backporting the EPT misconfig
handling would be an option. Tim and I agreed we wouldn't want to do this
immediately, but would consider doing it eventually.

> @@ -197,25 +198,34 @@ void pci_setup(void)
>              ((pci_mem_start << 1) != 0) )
>          pci_mem_start <<= 1;
>  
> -    /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
> -    while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
> +    /* Workaround XENMEM_add_to_physmap creating uncached mappings
> +     * by setting default MTRR type to write back mapping */
> +    if ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
>      {
> -        struct xen_add_to_physmap xatp;
> -        unsigned int nr_pages = min_t(
> -            unsigned int,
> -            hvm_info->low_mem_pgend - (pci_mem_start >> PAGE_SHIFT),
> -            (1u << 16) - 1);
> -        if ( hvm_info->high_mem_pgend == 0 )
> -            hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
> -        hvm_info->low_mem_pgend -= nr_pages;
> -        xatp.domid = DOMID_SELF;
> -        xatp.space = XENMAPSPACE_gmfn_range;
> -        xatp.idx = hvm_info->low_mem_pgend;
> -        xatp.gpfn = hvm_info->high_mem_pgend;
> -        xatp.size = nr_pages;
> -        if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
> -            BUG();
> -        hvm_info->high_mem_pgend += nr_pages;
> +        uint64_t mtrr_def_type = rdmsr(MSR_MTRRdefType);
> +        wrmsr(MSR_MTRRdefType, (1u << 11) | 6);
> +        /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
> +        while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
> +        {
> +            struct xen_add_to_physmap xatp;
> +            unsigned int nr_pages = min_t(
> +                unsigned int,
> +                hvm_info->low_mem_pgend - (pci_mem_start >> PAGE_SHIFT),
> +                (1u << 16) - 1);
> +            if ( hvm_info->high_mem_pgend == 0 )
> +                hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
> +            hvm_info->low_mem_pgend -= nr_pages;
> +            xatp.domid = DOMID_SELF;
> +            xatp.space = XENMAPSPACE_gmfn_range;
> +            xatp.idx = hvm_info->low_mem_pgend;
> +            xatp.gpfn = hvm_info->high_mem_pgend;
> +            xatp.size = nr_pages;
> +            if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
> +                BUG();
> +            hvm_info->high_mem_pgend += nr_pages;
> +        }
> +        /* Restore previous default MTRR value */
> +        wrmsr(MSR_MTRRdefType, mtrr_def_type);

So the previous while() has become an if() (with the loop nested inside),
but I can't see why, nor why the whole block needed its indentation changed
(making it quite a bit more difficult to spot any eventual change in that
code block). Also please make sure there are blank lines between
declarations and statements.

Jan
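
For readers coming to the hunk above cold, the magic constant in
wrmsr(MSR_MTRRdefType, (1u << 11) | 6) follows the layout of the
IA32_MTRR_DEF_TYPE MSR (0x2ff): bits 7:0 hold the default memory type and
bit 11 is the global MTRR enable. A minimal sketch with the bits spelled
out; the MTRR_DEF_TYPE_* and MEMTYPE_WB names are invented for clarity and
are not identifiers from the hvmloader sources:

    /* Breakdown of the value written to MSR_MTRRdefType in the patch.
     * Constant names are illustrative only. */
    #define MTRR_DEF_TYPE_ENABLE    (1u << 11)  /* E: enable all MTRRs */
    #define MTRR_DEF_TYPE_FIXED_EN  (1u << 10)  /* FE: enable fixed-range MTRRs */
    #define MEMTYPE_WB              6           /* default memory type: write-back */

    /* Equivalent to the patch's wrmsr(MSR_MTRRdefType, (1u << 11) | 6): */
    wrmsr(MSR_MTRRdefType, MTRR_DEF_TYPE_ENABLE | MEMTYPE_WB);

    /* ... relocation loop runs with a write-back default ... */

    /* The restore path then writes back the value previously saved with
     * rdmsr(MSR_MTRRdefType). */
    wrmsr(MSR_MTRRdefType, mtrr_def_type);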
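
As a purely illustrative sketch of the indentation point (not the actual
respin of the patch): the MTRR save/restore could bracket the existing loop
without converting it to an if() and without reindenting it, at the cost of
touching the MSR even when there is no RAM to relocate; blank lines separate
the declarations from the statements as asked for above:

    uint64_t mtrr_def_type = rdmsr(MSR_MTRRdefType);

    /* Temporarily make the default MTRR type write-back (and enable MTRRs)
     * so relocated RAM does not end up uncached. */
    wrmsr(MSR_MTRRdefType, (1u << 11) | 6);

    /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
    while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
    {
        struct xen_add_to_physmap xatp;
        unsigned int nr_pages = min_t(
            unsigned int,
            hvm_info->low_mem_pgend - (pci_mem_start >> PAGE_SHIFT),
            (1u << 16) - 1);

        if ( hvm_info->high_mem_pgend == 0 )
            hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
        hvm_info->low_mem_pgend -= nr_pages;
        xatp.domid = DOMID_SELF;
        xatp.space = XENMAPSPACE_gmfn_range;
        xatp.idx = hvm_info->low_mem_pgend;
        xatp.gpfn = hvm_info->high_mem_pgend;
        xatp.size = nr_pages;
        if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
            BUG();
        hvm_info->high_mem_pgend += nr_pages;
    }

    /* Restore the previous default MTRR value. */
    wrmsr(MSR_MTRRdefType, mtrr_def_type);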