
[xen staging] AMD/IOMMU: have callers specify the target level for page table walks



commit 5fafa6cf529a6c0cd0b12c920a2cc68a3cca99e1
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri Apr 22 14:51:37 2022 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Apr 22 14:51:37 2022 +0200

    AMD/IOMMU: have callers specify the target level for page table walks
    
    In order to be able to insert/remove super-pages we need to allow
    callers of the walking function to specify at which point to stop the
    walk. (For now at least gcc will instantiate just a variant of the
    function with the parameter eliminated, so effectively no change to
    generated code as far as the parameter addition goes.)
    
    Instead of merely adjusting a BUG_ON() condition, convert it into an
    error return - there's no reason to crash the entire host in that case.
    Leave an assertion though for spotting issues early in debug builds.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
 xen/drivers/passthrough/amd/iommu_map.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 6d42bcea0e..8bef46e045 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -239,7 +239,8 @@ void __init iommu_dte_add_device_entry(struct amd_iommu_dte *dte,
  * page tables.
  */
 static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
-                              unsigned long *pt_mfn, bool map)
+                              unsigned int target, unsigned long *pt_mfn,
+                              bool map)
 {
     union amd_iommu_pte *pde, *next_table_vaddr;
     unsigned long  next_table_mfn;
@@ -250,7 +251,11 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
     table = hd->arch.amd.root_table;
     level = hd->arch.amd.paging_mode;
 
-    BUG_ON( table == NULL || level < 1 || level > 6 );
+    if ( !table || target < 1 || level < target || level > 6 )
+    {
+        ASSERT_UNREACHABLE();
+        return 1;
+    }
 
     /*
      * A frame number past what the current page tables can represent can't
@@ -261,7 +266,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
 
     next_table_mfn = mfn_x(page_to_mfn(table));
 
-    while ( level > 1 )
+    while ( level > target )
     {
         unsigned int next_level = level - 1;
 
@@ -332,7 +337,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
         level--;
     }
 
-    /* mfn of level 1 page table */
+    /* mfn of target level page table */
     *pt_mfn = next_table_mfn;
     return 0;
 }
@@ -369,7 +374,7 @@ int cf_check amd_iommu_map_page(
         return rc;
     }
 
-    if ( iommu_pde_from_dfn(d, dfn_x(dfn), &pt_mfn, true) || !pt_mfn )
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), 1, &pt_mfn, true) || !pt_mfn )
     {
         spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_ERROR("invalid IO pagetable entry dfn = %"PRI_dfn"\n",
@@ -402,7 +407,7 @@ int cf_check amd_iommu_unmap_page(
         return 0;
     }
 
-    if ( iommu_pde_from_dfn(d, dfn_x(dfn), &pt_mfn, false) )
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), 1, &pt_mfn, false) )
     {
         spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_ERROR("invalid IO pagetable entry dfn = %"PRI_dfn"\n",
--
generated by git-patchbot for /home/xen/git/xen.git#staging
