
Re: [Xen-devel] [PATCH v3 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping



On May 04, 2016 4:41 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >>> On 04.05.16 at 03:45, <kevin.tian@xxxxxxxxx> wrote:
> >>  From: Xu, Quan
> >> Sent: Friday, April 29, 2016 5:25 PM
> >> --- a/xen/arch/x86/mm.c
> >> +++ b/xen/arch/x86/mm.c
> >> @@ -2467,7 +2467,7 @@ static int __get_page_type(struct page_info *page, unsigned long type,
> >>                             int preemptible)
> >>  {
> >>      unsigned long nx, x, y = page->u.inuse.type_info;
> >> -    int rc = 0;
> >> +    int rc = 0, ret = 0;
> >>
> >>      ASSERT(!(type & ~(PGT_type_mask | PGT_pae_xen_l2)));
> >>
> >> @@ -2578,11 +2578,11 @@ static int __get_page_type(struct page_info *page, unsigned long type,
> >>          if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
> >>          {
> >>              if ( (x & PGT_type_mask) == PGT_writable_page )
> >> -                iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
> >> +                ret = iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
> >>              else if ( type == PGT_writable_page )
> >> -                iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
> >> -                               page_to_mfn(page),
> >> -                               IOMMUF_readable|IOMMUF_writable);
> >> +                ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
> >> +                                     page_to_mfn(page),
> >> +                                     IOMMUF_readable|IOMMUF_writable);
> >>          }
> >>      }
> >>
> >> @@ -2599,6 +2599,9 @@ static int __get_page_type(struct page_info *page, unsigned long type,
> >>      if ( (x & PGT_partial) && !(nx & PGT_partial) )
> >>          put_page(page);
> >>
> >> +    if ( !rc )
> >> +        rc = ret;
> >> +
> >>      return rc;
> >>  }
> >
> > I know there were quite a few discussions around the above change
> > before (sorry, I don't remember all of them). My mental picture is
> > that we should return the error where it first occurs. However, the
> > above change favors an error from the later "rc = alloc_page_type"
> > over an earlier iommu_map/unmap_page error. Is that what we want?
> 
> Yes, as that's the primary operation here.
> 
> > If there is a reason that we cannot return immediately upon
> > iommu_map/unmap,
> 
> Since for Dom0 we don't call domain_crash(), we must not bypass
> alloc_page_type() here. And even for DomU it would seem at least fragile if we
> did - we had better not alter the refcounting behavior.
> 

I'm a little bit confused.
Just to check: for this __get_page_type(), can I leave my modification as is?
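
For my own understanding, the rule being applied is: the primary operation's error wins, and the IOMMU error is only reported when nothing else failed. Below is a small standalone sketch of that rule (plain C, compilable on its own; the helper names and error values are made up and only stand in for iommu_map/unmap_page() and alloc_page_type()):

/* Standalone sketch (not Xen code): made-up helpers standing in for
 * iommu_map/unmap_page() (secondary) and alloc_page_type() (primary). */
#include <stdio.h>

static int iommu_op(int fail)   { return fail ? -22 /* "-EINVAL" */ : 0; }
static int primary_op(int fail) { return fail ? -12 /* "-ENOMEM" */ : 0; }

static int get_page_type_like(int iommu_fails, int primary_fails)
{
    int rc = 0, ret = 0;

    ret = iommu_op(iommu_fails);      /* record the error, but do not bail out */
    rc = primary_op(primary_fails);   /* the primary operation still runs */

    if ( !rc )
        rc = ret;                     /* IOMMU error reported only if primary succeeded */

    return rc;
}

int main(void)
{
    printf("both ok:       %d\n", get_page_type_like(0, 0)); /* 0 */
    printf("iommu fails:   %d\n", get_page_type_like(1, 0)); /* -22 */
    printf("primary fails: %d\n", get_page_type_like(0, 1)); /* -12 */
    printf("both fail:     %d\n", get_page_type_like(1, 1)); /* -12: primary wins */
    return 0;
}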

> >> --- a/xen/arch/x86/mm/p2m-ept.c
> >> +++ b/xen/arch/x86/mm/p2m-ept.c
> >> @@ -821,6 +821,8 @@ out:
> >>      if ( needs_sync )
> >>          ept_sync_domain(p2m);
> >>
> >> +    ret = 0;
> >> +
> >>      /* For host p2m, may need to change VT-d page table.*/
> >>      if ( rc == 0 && p2m_is_hostp2m(p2m) && need_iommu(d) &&
> >>           need_modify_vtd_table )
> >> @@ -831,11 +833,29 @@ out:
> >>          {
> >>              if ( iommu_flags )
> >>                  for ( i = 0; i < (1 << order); i++ )
> >> -                    iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
> >> +                {
> >> +                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
> >> +
> >> +                    if ( !ret && unlikely(rc) )
> >
> > I think you should move the check of ret before iommu_map_page, since
> > we should stop mapping on any error (including one from the best-effort
> > unmap side).
> 
> Considering ret getting set to zero ahead of the loop plus ...
> 
> >> +                    {
> >> +                        while ( i-- )
> >> +                            iommu_unmap_page(d, gfn + i);
> >> +
> >> +                        ret = rc;
> >> +                        break;
> 
> ... this, it looks to me as if the checking of ret above is simply 
> unnecessary.
> 

Makes sense. I'll drop the ret check.
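
To double-check my understanding: since ret is zeroed ahead of the loop and we break out on the first failure, nothing inside the loop needs to look at ret. A small standalone sketch of the resulting rollback pattern (plain C, stand-in helpers, not the actual hunk; map_one()/unmap_one() are made up):

/* Standalone sketch (not Xen code) of the rollback-on-first-failure pattern:
 * mapping stops at the first error, the pages mapped so far are unmapped,
 * and the first error is what gets reported. */
#include <stdio.h>

#define ORDER 2  /* 1 << ORDER pages, mirroring the (1 << order) loop */

static int map_one(unsigned long gfn)    { return gfn == 2 ? -12 : 0; } /* made-up: 3rd page fails */
static void unmap_one(unsigned long gfn) { printf("rolled back gfn %lu\n", gfn); }

int main(void)
{
    unsigned long gfn = 0;
    int i, rc, ret = 0;               /* ret zeroed ahead of the loop, as in the patch */

    for ( i = 0; i < (1 << ORDER); i++ )
    {
        rc = map_one(gfn + i);
        if ( rc )
        {
            while ( i-- )             /* best-effort unmap of what was already mapped */
                unmap_one(gfn + i);

            ret = rc;                 /* remember the first failure */
            break;                    /* no further pages are attempted */
        }
    }

    printf("result: %d\n", ret);      /* -12 here, since the 3rd map failed */
    return 0;
}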

Quan


