
[xen stable-4.15] iommu/amd-vi: use correct level for quarantine domain page tables



commit 1f5f515da0f694d97939f346c628a3b7b612d165
Author:     Roger Pau Monne <roger.pau@xxxxxxxxxx>
AuthorDate: Wed Oct 11 13:14:21 2023 +0200
Commit:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CommitDate: Tue Oct 31 17:35:21 2023 +0000

    iommu/amd-vi: use correct level for quarantine domain page tables
    
    The current setup of the quarantine page tables assumes that the quarantine
    domain (dom_io) has been initialized with an address width of
    DEFAULT_DOMAIN_ADDRESS_WIDTH (48).
    
    However, dom_io, being a PV domain, gets its AMD-Vi IOMMU page-table
    levels based on the maximum (hot-pluggable) RAM address, and hence on
    systems with no RAM above the 512GB mark only 3 page-table levels are
    configured in the IOMMU.
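
    As a rough illustration (a hedged sketch, not the Xen implementation),
    the relationship between address width and levels follows from each
    page-table level resolving 9 bits of the address: 3 levels cover
    2^(12 + 3*9) = 2^39 bytes (512GB), while 4 levels cover 2^48 bytes,
    matching DEFAULT_DOMAIN_ADDRESS_WIDTH (48):

        /*
         * Illustrative only: derive the number of AMD-Vi page-table
         * levels needed to map frames up to end_gfn.  Each table holds
         * 512 entries, i.e. resolves 9 bits above PAGE_SHIFT (12).
         */
        static unsigned int paging_levels_for(unsigned long end_gfn)
        {
            unsigned int level = 1;

            while ( end_gfn > 512 )    /* one table maps 512 frames */
            {
                end_gfn >>= 9;
                ++level;
            }

            return level;
        }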
    
    On systems without RAM above the 512GB boundary,
    amd_iommu_quarantine_init() will set up page tables for the scratch
    page with 4 levels, while the IOMMU will be configured to use only 3
    levels.  The page destined to be used as level 1, and to contain a
    directory of PTEs, instead has its own address treated as the frame
    referenced by a leaf PTE, and thus the level 1 page becomes the leaf
    page.  Without the level mismatch it would be the level 0 page
    serving as the leaf page instead.
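
    A hypothetical sketch (not Xen code) of the resulting walk: the IOMMU
    descends one table per configured level, so on a chain built for 4
    levels the page it treats as the leaf is the one built for level
    (4 - configured levels):

        /*
         * Hypothetical illustration, not Xen code: print which page of a
         * 4-level chain the IOMMU treats as the leaf for a given number
         * of configured levels.
         */
        #include <stdio.h>

        #define LEVELS_BUILT 4 /* as set up by amd_iommu_quarantine_init() */

        int main(void)
        {
            for ( unsigned int used = 3; used <= LEVELS_BUILT; ++used )
                printf("%u levels configured: leaf is the page built as level %u\n",
                       used, LEVELS_BUILT - used);
            return 0;
        }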
    
    The level 1 page won't be used as such, and hence it's not possible
    to use it to gain access to other memory on the system.  However,
    that page is not cleared in amd_iommu_quarantine_init() as part of
    re-initialization of the device quarantine page tables, and hence
    data on the level 1 page can be leaked between device usages.
    
    Fix this by making sure the paging levels set up by
    amd_iommu_quarantine_init() match the number configured on the
    IOMMUs.
    
    Note that IVMD regions are not affected by this issue, as those areas are
    mapped taking the configured paging levels into account.
    
    This is XSA-445 / CVE-2023-46835
    
    Fixes: ea38867831da ('x86 / iommu: set up a scratch page in the quarantine domain')
    Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    (cherry picked from commit fe1e4668b373ec4c1e5602e75905a9fa8cc2be3f)
---
 xen/drivers/passthrough/amd/iommu_map.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index b4c1824491..3473db4c1e 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -584,9 +584,7 @@ static int fill_qpt(union amd_iommu_pte *this, unsigned int level,
 int amd_iommu_quarantine_init(struct pci_dev *pdev)
 {
     struct domain_iommu *hd = dom_iommu(dom_io);
-    unsigned long end_gfn =
-        1ul << (DEFAULT_DOMAIN_ADDRESS_WIDTH - PAGE_SHIFT);
-    unsigned int level = amd_iommu_get_paging_mode(end_gfn);
+    unsigned int level = hd->arch.amd.paging_mode;
     unsigned int req_id = get_dma_requestor_id(pdev->seg, pdev->sbdf.bdf);
     const struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(pdev->seg);
     int rc;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15
