
[xen master] AMD/IOMMU: correct potentially-UB shifts



commit d029b9cf13875823532ee6e4201421dba16c81d4
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri May 20 12:21:10 2022 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri May 20 12:21:10 2022 +0200

    AMD/IOMMU: correct potentially-UB shifts
    
    Recent changes (likely 5fafa6cf529a ["AMD/IOMMU: have callers specify
    the target level for page table walks"]) have made Coverity notice a
    shift count in iommu_pde_from_dfn() which might in theory grow too
    large. While this isn't a problem in practice, address the concern
    nevertheless, so as not to leave latent breakage in case very large
    superpages are enabled at some point.
    
    Coverity ID: 1504264
    
    While there, also address a similar issue in set_iommu_ptes_present().
    It's not clear to me why Coverity hasn't spotted that one.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
 xen/drivers/passthrough/amd/iommu_map.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 4a33df8c5e..963dcc7a4f 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -89,11 +89,11 @@ static unsigned int set_iommu_ptes_present(unsigned long pt_mfn,
                                            bool iw, bool ir)
 {
     union amd_iommu_pte *table, *pde;
-    unsigned int page_sz, flush_flags = 0;
+    unsigned long page_sz = 1UL << (PTE_PER_TABLE_SHIFT * (pde_level - 1));
+    unsigned int flush_flags = 0;
 
     table = map_domain_page(_mfn(pt_mfn));
     pde = &table[pfn_to_pde_idx(dfn, pde_level)];
-    page_sz = 1U << (PTE_PER_TABLE_SHIFT * (pde_level - 1));
 
     if ( (void *)(pde + nr_ptes) > (void *)table + PAGE_SIZE )
     {
@@ -281,7 +281,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
         {
             unsigned long mfn, pfn;
 
-            pfn =  dfn & ~((1 << (PTE_PER_TABLE_SHIFT * next_level)) - 1);
+            pfn = dfn & ~((1UL << (PTE_PER_TABLE_SHIFT * next_level)) - 1);
             mfn = next_table_mfn;
 
             /* allocate lower level page table */
--
generated by git-patchbot for /home/xen/git/xen.git#master
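
For illustration, here is a minimal standalone sketch of the C rule behind
the fix (my own harness, not Xen code; it only reuses the
PTE_PER_TABLE_SHIFT value of 9, i.e. 512 entries per table level).
Shifting a 32-bit 1U by 32 or more bits is undefined behaviour, which with
9 address bits per level is reached from level 5 onwards, whereas the
64-bit 1UL form stays defined up to a shift count of 63 on LP64 targets
such as x86-64:

#include <limits.h>
#include <stdio.h>

#define PTE_PER_TABLE_SHIFT 9 /* 512 PTEs per page-table level */

int main(void)
{
    unsigned int level;

    for ( level = 1; level <= 7; ++level )
    {
        unsigned int count = PTE_PER_TABLE_SHIFT * (level - 1);

        if ( count >= sizeof(unsigned int) * CHAR_BIT )
            /* "1U << count" would be undefined behaviour here. */
            printf("level %u: shift count %u exceeds 32-bit width\n",
                   level, count);
        else
            printf("level %u: 1U  << %u = %u\n", level, count, 1U << count);

        /* The widened form is defined for counts up to 63 on LP64. */
        printf("level %u: 1UL << %u = %lu\n", level, count, 1UL << count);
    }

    return 0;
}

Widening the left operand, as the patch does, is the usual remedy: the
expression remains a compile-time constant and no runtime check is needed.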