Re: [Xen-devel] [PATCH v2] iommu / p2m: add a page_order parameter to iommu_map/unmap_page()
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 30 October 2018 16:08
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Julien Grall <julien.grall@xxxxxxx>; Andrew Cooper
> <Andrew.Cooper3@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> Wei Liu <wei.liu2@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> Jun Nakajima <jun.nakajima@xxxxxxxxx>; Kevin Tian <kevin.tian@xxxxxxxxx>;
> Stefano Stabellini <sstabellini@xxxxxxxxxx>; xen-devel
> <xen-devel@xxxxxxxxxxxxxxxxxxxx>; Konrad Rzeszutek Wilk
> <konrad.wilk@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>
> Subject: Re: [PATCH v2] iommu / p2m: add a page_order parameter to
> iommu_map/unmap_page()
>
> >>> On 29.10.18 at 14:29, <paul.durrant@xxxxxxxxxx> wrote:
> > --- a/xen/common/grant_table.c
> > +++ b/xen/common/grant_table.c
> > @@ -1142,12 +1142,14 @@ map_grant_ref(
> >      {
> >          if ( !(kind & MAPKIND_WRITE) )
> >              err = iommu_map_page(ld, _dfn(mfn_x(mfn)), mfn,
> > +                                 PAGE_ORDER_4K,
> >                                   IOMMUF_readable | IOMMUF_writable);
> >      }
> >      else if ( act_pin && !old_pin )
> >      {
> >          if ( !kind )
> >              err = iommu_map_page(ld, _dfn(mfn_x(mfn)), mfn,
> > +                                 PAGE_ORDER_4K,
> >                                   IOMMUF_readable);
> >      }
> >      if ( err )
> > @@ -1396,10 +1398,11 @@ unmap_common(
> >
> >      kind = mapkind(lgt, rd, op->mfn);
> >      if ( !kind )
> > -        err = iommu_unmap_page(ld, _dfn(mfn_x(op->mfn)));
> > +        err = iommu_unmap_page(ld, _dfn(mfn_x(op->mfn)),
> > +                               PAGE_ORDER_4K);
> >      else if ( !(kind & MAPKIND_WRITE) )
> >          err = iommu_map_page(ld, _dfn(mfn_x(op->mfn)), op->mfn,
> > -                             IOMMUF_readable);
> > +                             PAGE_ORDER_4K, IOMMUF_readable);
> >
> >      double_gt_unlock(lgt, rgt);
>
> I am, btw, uncertain that using PAGE_ORDER_4K is correct here:
> Other than in the IOMMU code, grant table code isn't tied to a
> particular architecture, and hence ought to work fine on a port
> to an architecture with 8k, 16k, or 32k pages.

Would you suggest I add an arch specific #define for a grant table page
order and then use that?
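For illustration, a minimal sketch of what such a define could look
like. The GNTTAB_PAGE_ORDER name and its placement in a per-arch grant
table header are assumptions made for the example, not part of the
patch:

    /* Hypothetical per-arch define, e.g. in the arch's grant_table.h: */
    #define GNTTAB_PAGE_ORDER PAGE_ORDER_4K

    /* The map_grant_ref() call site quoted above would then read: */
    err = iommu_map_page(ld, _dfn(mfn_x(mfn)), mfn,
                         GNTTAB_PAGE_ORDER,
                         IOMMUF_readable | IOMMUF_writable);

A port to an architecture with larger pages would then only need to
supply its own value for the define, leaving the common grant table
code unchanged.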
> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -305,47 +305,76 @@ void iommu_domain_destroy(struct domain *d)
> >  }
> >
> >  int iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
> > -                   unsigned int flags)
> > +                   unsigned int page_order, unsigned int flags)
> >  {
> >      const struct domain_iommu *hd = dom_iommu(d);
> > -    int rc;
> > +    unsigned long i;
> >
> >      if ( !iommu_enabled || !hd->platform_ops )
> >          return 0;
> >
> > -    rc = hd->platform_ops->map_page(d, dfn, mfn, flags);
> > -    if ( unlikely(rc) )
> > +    ASSERT(!(dfn_x(dfn) & ((1ul << page_order) - 1)));
> > +    ASSERT(!(mfn_x(mfn) & ((1ul << page_order) - 1)));
> > +
> > +    for ( i = 0; i < (1ul << page_order); i++ )
> >      {
> > +        int ignored, err = hd->platform_ops->map_page(d, dfn_add(dfn, i),
> > +                                                      mfn_add(mfn, i),
> > +                                                      flags);
> > +
> > +        if ( likely(!err) )
> > +            continue;
> > +
> >          if ( !d->is_shutting_down && printk_ratelimit() )
> >              printk(XENLOG_ERR
> >                     "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
> > -                   d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
> > +                   d->domain_id, dfn_x(dfn_add(dfn, i)),
> > +                   mfn_x(mfn_add(mfn, i)), err);
> > +
> > +        while (i--)
> > +            /* assign to something to avoid compiler warning */
> > +            ignored = hd->platform_ops->unmap_page(d, dfn_add(dfn, i));
>
> Hmm, as said on v1 - please use the original mode (while-if-continue)
> here. This lets you get away without a local variable that's never
> read, and which hence future compiler versions may legitimately warn
> about.

Ok, I clearly don't understand what you mean by 'while-if-continue'
then. Above I have for-if-continue, which is what I thought you wanted.
What code structure are you actually looking for?

  Paul

> Jan
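For reference, a minimal sketch of one possible reading of
'while-if-continue', an editorial guess rather than code from the
thread, in which the if/continue consumes unmap_page()'s return value
so no write-only local variable is needed:

    while ( i-- )
        /* The if/continue discards the error code explicitly,
         * without a variable that is assigned but never read. */
        if ( hd->platform_ops->unmap_page(d, dfn_add(dfn, i)) )
            continue;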