[xen stable-4.16] iommu/amd-vi: use correct level for quarantine domain page tables
commit 6fea3835d91aaa048f66036c34c25532e2dd2b67
Author:     Roger Pau Monne <roger.pau@xxxxxxxxxx>
AuthorDate: Wed Oct 11 13:14:21 2023 +0200
Commit:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CommitDate: Tue Oct 31 17:23:44 2023 +0000

    iommu/amd-vi: use correct level for quarantine domain page tables

    The current setup of the quarantine page tables assumes that the
    quarantine domain (dom_io) has been initialized with an address width
    of DEFAULT_DOMAIN_ADDRESS_WIDTH (48).

    However, dom_io being a PV domain gets the AMD-Vi IOMMU page table
    levels based on the maximum (hot pluggable) RAM address, and hence on
    systems with no RAM above the 512GB mark only 3 page-table levels are
    configured in the IOMMU.

    On systems without RAM above the 512GB boundary
    amd_iommu_quarantine_init() will set up page tables for the scratch
    page with 4 levels, while the IOMMU will be configured to use only 3
    levels.  The page destined to be used as level 1, and to contain a
    directory of PTEs, ends up being the address in a PTE itself, and thus
    the level 1 page becomes the leaf page.  Without the level mismatch it
    would be the level 0 page that is the leaf page instead.

    The level 1 page won't be used as such, and hence it's not possible to
    use it to gain access to other memory on the system.  However, that
    page is not cleared in amd_iommu_quarantine_init() as part of
    re-initialization of the device quarantine page tables, and hence data
    on the level 1 page can be leaked between device usages.

    Fix this by making sure the paging levels set up by
    amd_iommu_quarantine_init() match the number configured on the IOMMUs.

    Note that IVMD regions are not affected by this issue, as those areas
    are mapped taking the configured paging levels into account.

    This is XSA-445 / CVE-2023-46835

    Fixes: ea38867831da ('x86 / iommu: set up a scratch page in the quarantine domain')
    Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    (cherry picked from commit fe1e4668b373ec4c1e5602e75905a9fa8cc2be3f)
---
 xen/drivers/passthrough/amd/iommu_map.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index cf6f01b633..1b414a413b 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -654,9 +654,7 @@ static int fill_qpt(union amd_iommu_pte *this, unsigned int level,
 int amd_iommu_quarantine_init(struct pci_dev *pdev, bool scratch_page)
 {
     struct domain_iommu *hd = dom_iommu(dom_io);
-    unsigned long end_gfn =
-        1ul << (DEFAULT_DOMAIN_ADDRESS_WIDTH - PAGE_SHIFT);
-    unsigned int level = amd_iommu_get_paging_mode(end_gfn);
+    unsigned int level = hd->arch.amd.paging_mode;
    unsigned int req_id = get_dma_requestor_id(pdev->seg, pdev->sbdf.bdf);
     const struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(pdev->seg);
     int rc;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16
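
For context on the level arithmetic: each AMD-Vi page-table level resolves 9
address bits on top of the 12-bit page offset, so a 48-bit address width
(DEFAULT_DOMAIN_ADDRESS_WIDTH) needs 4 levels, while RAM topping out below the
512GB (2^39) boundary fits in 3.  Below is a minimal stand-alone sketch of that
computation; the helper name paging_levels() is hypothetical, and this is an
illustration rather than the actual amd_iommu_get_paging_mode() implementation:

#include <stdio.h>

#define PAGE_SHIFT          12 /* 4KiB pages */
#define PTE_PER_TABLE_SHIFT  9 /* 512 PTEs per page-table page */

/*
 * How many page-table levels are needed so that max_gfn guest frames
 * are addressable?  Each extra level multiplies the reachable GFN
 * space by 512 (2^9).  Hypothetical reimplementation for illustration.
 */
static unsigned int paging_levels(unsigned long max_gfn)
{
    unsigned int level = 1;

    while ( max_gfn > (1ul << (level * PTE_PER_TABLE_SHIFT)) )
        ++level;

    return level;
}

int main(void)
{
    /* 48-bit address width -> 2^36 GFNs -> 4 levels. */
    printf("48-bit width: %u levels\n",
           paging_levels(1ul << (48 - PAGE_SHIFT)));

    /* RAM below the 512GiB (2^39) boundary -> 2^27 GFNs -> 3 levels. */
    printf("<512GiB RAM:  %u levels\n",
           paging_levels(1ul << (39 - PAGE_SHIFT)));

    return 0;
}

With the fix, amd_iommu_quarantine_init() reads the level from
hd->arch.amd.paging_mode rather than recomputing it from the default address
width, so the quarantine page tables always match however many levels the
IOMMUs were actually configured with.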