
Re: [Xen-devel] [Patch v6 02/11] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping



On June 01, 2016 6:05 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >>> On 31.05.16 at 15:57, <quan.xu@xxxxxxxxx> wrote:
> > --- a/xen/arch/x86/mm/p2m-pt.c
> > +++ b/xen/arch/x86/mm/p2m-pt.c
> > @@ -673,6 +673,8 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
> >      if ( iommu_enabled && need_iommu(p2m->domain) &&
> >           (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
> >      {
> > +        int ret;
> > +
> >          if ( iommu_use_hap_pt(p2m->domain) )
> >          {
> >              if ( iommu_old_flags )
> > @@ -680,11 +682,27 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
> >          }
> >          else if ( iommu_pte_flags )
> >              for ( i = 0; i < (1UL << page_order); i++ )
> > -                iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
> > -                               iommu_pte_flags);
> > +            {
> > +                ret = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
> > +                                     iommu_pte_flags);
> > +                if ( unlikely(ret) )
> > +                {
> > +                    while ( i-- )
> > +                        iommu_unmap_page(p2m->domain, gfn + i);
> > +
> > +                    if ( !rc )
> > +                        rc = ret;
> > +
> > +                    break;
> > +                }
> > +            }
> 
> So why do you not use the same code structure here as you do on the EPT
> side? 

I tried to keep the modification minimal, but if you point out any further
issue, I am happy to fix it.
Also, I was not sure whether adding the 'rc == 0' condition was an arbitrary
decision, even though I was aware that 'rc' is zero when the code reaches
this point.

> I.e. do away with using "ret" altogether, moving it all into ...
> 
> >          else
> >              for ( i = 0; i < (1UL << page_order); i++ )
> > -                iommu_unmap_page(p2m->domain, gfn + i);
> > +            {
> > +                ret = iommu_unmap_page(p2m->domain, gfn + i);
> > +                if ( !rc )
> > +                    rc = ret;
> > +            }
> 
> ... this extremely narrow scope?
> 

What about this fix:

--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -670,7 +670,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
          && (gfn + (1UL << page_order) - 1 > p2m->max_mapped_pfn) )
         p2m->max_mapped_pfn = gfn + (1UL << page_order) - 1;

-    if ( iommu_enabled && need_iommu(p2m->domain) &&
+    if ( rc == 0 && iommu_enabled && need_iommu(p2m->domain) &&
          (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
     {
         if ( iommu_use_hap_pt(p2m->domain) )
@@ -678,13 +678,33 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
             if ( iommu_old_flags )
                 amd_iommu_flush_pages(p2m->domain, gfn, page_order);
         }
-        else if ( iommu_pte_flags )
-            for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
-                               iommu_pte_flags);
         else
-            for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_unmap_page(p2m->domain, gfn + i);
+        {
+            if ( iommu_pte_flags )
+                for ( i = 0; i < (1UL << page_order); i++ )
+                {
+                    rc = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
+                                        iommu_pte_flags);
+                    if ( unlikely(rc) )
+                    {
+                        while ( i-- )
+                            iommu_unmap_page(p2m->domain, gfn + i);
+
+                        break;
+                    }
+                }
+            else
+            {
+                int ret;
+
+                for ( i = 0; i < (1UL << page_order); i++ )
+                {
+                    ret = iommu_unmap_page(p2m->domain, gfn + i);
+                    if ( !rc )
+                        rc = ret;
+                }
+            }
+        }
     }

Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
