
Re: [Xen-devel] [PATCH v5 01/10] vt-d: fix the IOMMU flush issue



On May 23, 2016 11:43 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >>> On 23.05.16 at 17:22, <quan.xu@xxxxxxxxx> wrote:
> > On May 23, 2016 9:31 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >> >>> On 18.05.16 at 10:08, <quan.xu@xxxxxxxxx> wrote:
> >> > --- a/xen/drivers/passthrough/vtd/iommu.c
> >> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> >> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
> >> >      }
> >> >  }
> >> >
> >> > -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> >> > -        int dma_old_pte_present, unsigned int page_count)
> >> > +static int __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> >> > +                                     bool_t dma_old_pte_present,
> >> > +                                     unsigned int page_count)
> >>
> >> I realize you say so in the overview mail, but the continuing lack of
> >> __must_check here causes review trouble again. And I have a hard time
> >> seeing how adding these annotations right away would "disrupt the
> >> order", as long as the series is properly ordered / broken up.
> >>
> >
> > If I add __must_check annotations here right now, e.g.
> >
> > -static void intel_iommu_iotlb_flush()
> > +static int __must_check iommu_flush_iotlb_pages()
> >
> > ...
> >
> > @@ -179,8 +179,9 @@ struct iommu_ops {
> > -    void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
> > +    int __must_check (*iotlb_flush)(struct domain *d, unsigned long gfn,
> > +                                    unsigned int page_count);
> > ...
> > }
> >
> > This should belong here too.
> 
> Correct. And where's the problem?
>

IMO it is not a big deal.

I think that would make patch 1 too fat. Why not keep this patch focused on
propagating return values from the IOMMU flush interfaces?
If we agree on that, I will add the __must_check annotations and do the renaming
in separate patches; that is acceptable to me.

Furthermore, the following change (from patch 4) would then also need to be
folded into this patch:

-static void dma_pte_clear_one(struct domain *domain, u64 addr)
+static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
{
...
-        __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
+        rc = __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
...
}

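Roughly, the whole helper would then look like this (only a sketch to show the
propagation; the PTE lookup and clearing are elided, and I am assuming the
existing iommu_dont_flush_iotlb check stays as it is today):

static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
{
    int rc = 0;

    /* ... look up and clear the PTE exactly as the current code does ... */

    if ( !this_cpu(iommu_dont_flush_iotlb) )
        /* Propagate the flush result instead of discarding it. */
        rc = __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);

    return rc;
}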

 
> >> > +                                    DMA_CCMD_MASK_NOBIT, 1);
> >> > +
> >> > +    /*
> >> > +     * The current logic for rc returns:
> >> > +     *   - positive  invoke iommu_flush_write_buffer to flush cache.
> >> > +     *   - zero      success.
> >> > +     *   - negative  failure. Continue to flush IOMMU IOTLB on a best
> >> > +     *               effort basis.
> >> > +     */
> >> > +    if ( rc <= 0 )
> >> >      {
> >> >          int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> >> > -        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> >> > +
> >> > +        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
> >>
> >> If rc was negative before this call, you may end up returning success
> >> without having been successful. Furthermore I think it was you who last
> >> time round reminded me that iommu_flush_iotlb_dsi() can also return 1,
> >> which you don't take care of.
> >>
> >
> > Yes, the iommu_flush_iotlb_dsi() can also return 1.
> > Look at the call tree, at the beginning of
> > flush_context_qi()/flush_iotlb_qi(), or
> > flush_context_reg()/flush_iotlb_reg()..
> >
> > If rc was negative when we call iommu_flush_context_device(), it is
> > impossible to return 1 for iommu_flush_iotlb_dsi().
> 
> This is far from obvious, so please add a respective ASSERT() if you want to
> rely on that.
> 
> > IMO, furthermore, this should not belong  to comment.
> 
> ???

Thinking about it again, I will add a comment and a respective ASSERT().
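
For example, something along these lines (only a sketch; the exact comment
wording and the placement of the ASSERT() are still open, and using a separate
local ret so that a prior negative rc is not overwritten is meant to address
your first point as well):

    if ( rc <= 0 )
    {
        int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
        int ret = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);

        /*
         * A negative rc from the preceding context flush means the early
         * return-1 conditions at the beginning of flush_iotlb_qi()/
         * flush_iotlb_reg() do not apply, so the IOTLB flush cannot ask
         * for a write buffer flush (i.e. cannot return 1) here.
         */
        ASSERT(!(rc < 0 && ret > 0));

        if ( !rc )
            rc = ret;
    }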

Quan

 

