
[xen stable-4.16] x86/IOMMU: move tracking in iommu_identity_mapping()



commit 10d6cba5989d8acb94ae293f22eb434d85f735cb
Author:     Teddy Astie <teddy.astie@xxxxxxxxxx>
AuthorDate: Tue Aug 13 16:52:06 2024 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Aug 13 16:52:06 2024 +0200

    x86/IOMMU: move tracking in iommu_identity_mapping()
    
    If for some reason xmalloc() fails after having mapped the reserved
    regions, an error is reported, but the regions remain mapped in the P2M.
    
    Similarly if an error occurs during set_identity_p2m_entry() (except on
    the first call), the partial mappings of the region would be retained
    without being tracked anywhere, and hence without there being a way to
    remove them again from the domain's P2M.
    
    Move the setting up of the list entry ahead of trying to map the region.
    In cases other than the first mapping failing, keep a record of the full
    region, so that a subsequent unmapping request can properly tear the
    mappings down.
    
    To compensate for the potentially excess unmapping requests, don't log a
    warning from p2m_remove_identity_entry() when there really was nothing
    mapped at a given GFN.
    
    This is XSA-460 / CVE-2024-31145.
    
    Fixes: 2201b67b9128 ("VT-d: improve RMRR region handling")
    Fixes: c0e19d7c6c42 ("IOMMU: generalize VT-d's tracking of mapped RMRR regions")
    Signed-off-by: Teddy Astie <teddy.astie@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    master commit: beadd68b5490ada053d72f8a9ce6fd696d626596
    master date: 2024-08-13 16:36:40 +0200
---
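[Note, not part of the patch: the sketch below is a minimal, self-contained illustration of the "track before map" pattern the change adopts. All names in it (struct ident_map, tracked, map_one(), identity_map(), fail_at) are hypothetical stand-ins, not Xen's actual API; the real code uses struct identity_map, set_identity_p2m_entry() and the domain's identity_maps list as shown in the diff below.]

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the per-domain tracking list entry. */
struct ident_map {
    struct ident_map *next;
    unsigned long base_pfn, end_pfn;
};

static struct ident_map *tracked;   /* head of the tracking list */

/* Pretend per-page mapping call; fails at one chosen PFN for demonstration. */
static unsigned long fail_at = ~0UL;

static int map_one(unsigned long pfn)
{
    return pfn == fail_at ? -1 : 0;
}

/*
 * Record the region in the tracking list *before* establishing any
 * mappings.  If mapping stops part way through, the entry already covers
 * the whole range, so a later teardown pass can undo whatever was
 * actually mapped.  Only when the very first mapping fails (i.e. nothing
 * was mapped at all) is the entry removed again.
 */
static int identity_map(unsigned long base_pfn, unsigned long end_pfn)
{
    struct ident_map *m = malloc(sizeof(*m));

    if ( !m )
        return -1;

    m->base_pfn = base_pfn;
    m->end_pfn = end_pfn;
    m->next = tracked;
    tracked = m;

    for ( unsigned long pfn = base_pfn; pfn < end_pfn; ++pfn )
    {
        if ( !map_one(pfn) )
            continue;

        if ( pfn == base_pfn )          /* nothing mapped yet: untrack */
        {
            tracked = m->next;
            free(m);
        }
        return -1;                      /* partial range stays tracked */
    }

    return 0;
}

int main(void)
{
    fail_at = 3;                        /* simulate a failure mid-range */

    if ( identity_map(0, 8) )
        printf("mapping failed; tracking entry %s\n",
               tracked ? "kept for later cleanup" : "dropped");

    return 0;
}

[Keeping the full range on record means a later teardown may issue unmap requests for tail pages that were never mapped, which is why the p2m change below only suppresses the warning when nothing was actually mapped at the GFN, rather than removing the warning entirely.]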
 xen/arch/x86/mm/p2m.c               |  8 +++++---
 xen/drivers/passthrough/x86/iommu.c | 30 +++++++++++++++++++++---------
 2 files changed, 26 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index ddd2f861c3..c4c653d7fe 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1618,9 +1618,11 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
     else
     {
         gfn_unlock(p2m, gfn, 0);
-        printk(XENLOG_G_WARNING
-               "non-identity map d%d:%lx not cleared (mapped to %lx)\n",
-               d->domain_id, gfn_l, mfn_x(mfn));
+        if ( (p2mt != p2m_invalid && p2mt != p2m_mmio_dm) ||
+             a != p2m_access_n || !mfn_eq(mfn, INVALID_MFN) )
+           printk(XENLOG_G_WARNING
+                  "non-identity map %pd:%lx not cleared (mapped to %lx)\n",
+                  d, gfn_l, mfn_x(mfn));
         ret = 0;
     }
 
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index dc9936e169..b9a50f6ea9 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -238,24 +238,36 @@ int iommu_identity_mapping(struct domain *d, p2m_access_t p2ma,
     if ( p2ma == p2m_access_x )
         return -ENOENT;
 
-    while ( base_pfn < end_pfn )
-    {
-        int err = set_identity_p2m_entry(d, base_pfn, p2ma, flag);
-
-        if ( err )
-            return err;
-        base_pfn++;
-    }
-
     map = xmalloc(struct identity_map);
     if ( !map )
         return -ENOMEM;
+
     map->base = base;
     map->end = end;
     map->access = p2ma;
     map->count = 1;
+
+    /*
+     * Insert into list ahead of mapping, so the range can be found when
+     * trying to clean up.
+     */
     list_add_tail(&map->list, &hd->arch.identity_maps);
 
+    for ( ; base_pfn < end_pfn; ++base_pfn )
+    {
+        int err = set_identity_p2m_entry(d, base_pfn, p2ma, flag);
+
+        if ( !err )
+            continue;
+
+        if ( (map->base >> PAGE_SHIFT_4K) == base_pfn )
+        {
+            list_del(&map->list);
+            xfree(map);
+        }
+        return err;
+    }
+
     return 0;
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16