Re: [Xen-devel] [PATCH] iommu / p2m: add a page_order parameter to iommu_map/unmap_page()
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 26 October 2018 16:46
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Julien Grall <julien.grall@xxxxxxx>; Andrew Cooper
> <Andrew.Cooper3@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> Wei Liu <wei.liu2@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> Jun Nakajima <jun.nakajima@xxxxxxxxx>; Kevin Tian <kevin.tian@xxxxxxxxx>;
> Stefano Stabellini <sstabellini@xxxxxxxxxx>; xen-devel
> <xen-devel@xxxxxxxxxxxxxxxxxxxx>; Konrad Rzeszutek Wilk
> <konrad.wilk@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>
> Subject: Re: [PATCH] iommu / p2m: add a page_order parameter to
> iommu_map/unmap_page()
>
> >>> On 17.10.18 at 10:19, <paul.durrant@xxxxxxxxxx> wrote:
> > @@ -781,28 +765,9 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
> >      int rc = 0;
> >
> >      if ( !paging_mode_translate(d) )
> > -    {
> > -        if ( need_iommu_pt_sync(d) && t == p2m_ram_rw )
> > -        {
> > -            dfn_t dfn = _dfn(mfn_x(mfn));
> > -
> > -            for ( i = 0; i < (1 << page_order); i++ )
> > -            {
> > -                rc = iommu_map_page(d, dfn_add(dfn, i), mfn_add(mfn, i),
> > -                                    IOMMUF_readable|IOMMUF_writable);
> > -                if ( rc != 0 )
> > -                {
> > -                    while ( i-- > 0 )
> > -                        /* If statement to satisfy __must_check. */
> > -                        if ( iommu_unmap_page(d, dfn_add(dfn, i)) )
> > -                            continue;
> > -
> > -                    return rc;
> > -                }
> > -            }
> > -        }
> > -        return 0;
> > -    }
> > +        return (need_iommu_pt_sync(d) && t == p2m_ram_rw) ?
> > +            iommu_map_page(d, _dfn(mfn_x(mfn)), mfn, PAGE_ORDER_4K,
>
> page_order?

Oh yes. Good spot.

> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -305,50 +305,71 @@ void iommu_domain_destroy(struct domain *d)
> >  }
> >
> >  int iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
> > -                   unsigned int flags)
> > +                   unsigned int page_order, unsigned int flags)
> >  {
> >      const struct domain_iommu *hd = dom_iommu(d);
> > -    int rc;
> > +    unsigned long i;
> >
> >      if ( !iommu_enabled || !hd->platform_ops )
> >          return 0;
> >
> > -    rc = hd->platform_ops->map_page(d, dfn, mfn, flags);
> > -    if ( unlikely(rc) )
> > +    for ( i = 0; i < (1ul << page_order); i++ )
>
> Above here, can you please check (or assert) that the low bits of
> dfn and mfn satisfy page_order?

Yes, that would be a good idea.
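Something like this at the top of both functions, I suppose (untested
sketch; IS_ALIGNED() being the existing macro from xen/macros.h, and the
unmap side would of course only check dfn):

    /* dfn and mfn must be suitably aligned for the requested order. */
    ASSERT(IS_ALIGNED(dfn_x(dfn), 1ul << page_order));
    ASSERT(IS_ALIGNED(mfn_x(mfn), 1ul << page_order));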
> >      {
> > -        if ( !d->is_shutting_down && printk_ratelimit() )
> > -            printk(XENLOG_ERR
> > -                   "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
> > -                   d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
> > +        int ignored, rc = hd->platform_ops->map_page(d, dfn_add(dfn, i),
> > +                                                     mfn_add(mfn, i),
> > +                                                     flags);
> >
> > -        if ( !is_hardware_domain(d) )
> > -            domain_crash(d);
> > +        if ( unlikely(rc) )
> > +        {
> > +            while (i--)
> > +            {
> > +                /* assign to something to avoid compiler warning */
> > +                ignored = hd->platform_ops->unmap_page(d, dfn_add(dfn, i));
>
> I have to admit that I'd prefer the original "if() continue;" approach.

Presumably to reduce nesting... I'll restructure it.
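Probably along these lines (untested sketch; the error path keeps the
logging and crash handling from the patch, just flattened, and the
'ignored' variable goes away):

    for ( i = 0; i < (1ul << page_order); i++ )
    {
        int rc = hd->platform_ops->map_page(d, dfn_add(dfn, i),
                                            mfn_add(mfn, i), flags);

        if ( likely(!rc) )
            continue;

        /* Unwind the mappings established so far... */
        while ( i-- )
            /* if statement to satisfy __must_check */
            if ( hd->platform_ops->unmap_page(d, dfn_add(dfn, i)) )
                continue;

        /* ...then report the failure and crash a non-hardware domain. */
        if ( !d->is_shutting_down && printk_ratelimit() )
            printk(XENLOG_ERR
                   "d%d: IOMMU order %u mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
                   d->domain_id, page_order, dfn_x(dfn), mfn_x(mfn), rc);

        if ( !is_hardware_domain(d) )
            domain_crash(d);

        return rc;
    }

    return 0;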
> > +            }
> > +
> > +            if ( !d->is_shutting_down && printk_ratelimit() )
> > +                printk(XENLOG_ERR
> > +                       "d%d: IOMMU order %u mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
> > +                       d->domain_id, page_order, dfn_x(dfn), mfn_x(mfn),
> > +                       rc);
> > +
> > +            if ( !is_hardware_domain(d) )
> > +                domain_crash(d);
> > +
> > +            return rc;
> > +        }
> >      }
> >
> > -    return rc;
> > +    return 0;
> >  }
> >
> > -int iommu_unmap_page(struct domain *d, dfn_t dfn)
> > +int iommu_unmap_page(struct domain *d, dfn_t dfn, unsigned int page_order)
> >  {
> >      const struct domain_iommu *hd = dom_iommu(d);
> > -    int rc;
> > +    unsigned long i;
> >
> >      if ( !iommu_enabled || !hd->platform_ops )
> >          return 0;
> >
> > -    rc = hd->platform_ops->unmap_page(d, dfn);
> > -    if ( unlikely(rc) )
> > +    for ( i = 0; i < (1ul << page_order); i++ )
>
> Check/assert above here again please.

Sure.

> >      {
> > -        if ( !d->is_shutting_down && printk_ratelimit() )
> > -            printk(XENLOG_ERR
> > -                   "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n",
> > -                   d->domain_id, dfn_x(dfn), rc);
> > +        int rc = hd->platform_ops->unmap_page(d, dfn_add(dfn, i));
> >
> > -        if ( !is_hardware_domain(d) )
> > -            domain_crash(d);
> > +        if ( unlikely(rc) )
> > +        {
> > +            if ( !d->is_shutting_down && printk_ratelimit() )
> > +                printk(XENLOG_ERR
> > +                       "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n",
> > +                       d->domain_id, dfn_x(dfn), rc);
> > +
> > +            if ( !is_hardware_domain(d) )
> > +                domain_crash(d);
> > +
> > +            return rc;
>
> This is not in line with the code you drop: Unmap should continue
> the loop, and just record (and then report) the first (if any) error,
> to be on the safe side if callers assume the unmap won't fail (which
> can't be excluded despite the __must_check, as the code in
> iommu_map_page() shows). Perhaps the logging then should also
> be restricted to just the first case of error, and in any event please
> with the correct dfn (or with the order also logged).

Ok. I guess unmapping as much as is possible for the h/w domain is
reasonable. Any other domain will of course be crashed, so it would seem
better to bail out of the loop early in that case.
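I.e. something like this (untested sketch; assumes rc is declared at
function scope and initialised to 0, and is what gets returned):

    for ( i = 0; i < (1ul << page_order); i++ )
    {
        int err = hd->platform_ops->unmap_page(d, dfn_add(dfn, i));

        if ( likely(!err) )
            continue;

        /* Record and log only the first error, with the failing dfn. */
        if ( !rc )
        {
            if ( !d->is_shutting_down && printk_ratelimit() )
                printk(XENLOG_ERR
                       "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n",
                       d->domain_id, dfn_x(dfn_add(dfn, i)), err);

            rc = err;
        }

        /*
         * Carry on unmapping as much as possible for the hardware
         * domain; crash anything else and bail out early.
         */
        if ( !is_hardware_domain(d) )
        {
            domain_crash(d);
            break;
        }
    }

    return rc;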
> > --- a/xen/include/xen/iommu.h
> > +++ b/xen/include/xen/iommu.h
> > @@ -89,9 +89,11 @@ void iommu_teardown(struct domain *d);
> >  #define IOMMUF_readable (1u<<_IOMMUF_readable)
> >  #define _IOMMUF_writable 1
> >  #define IOMMUF_writable (1u<<_IOMMUF_writable)
> > -int __must_check iommu_map_page(struct domain *d, dfn_t dfn,
> > -                                mfn_t mfn, unsigned int flags);
> > -int __must_check iommu_unmap_page(struct domain *d, dfn_t dfn);
> > +int __must_check iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
> > +                                unsigned int page_order,
> > +                                unsigned int flags);
>
> Can't these two lines be combined?

Looks like it. I'll check.

  Paul

> Jan
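P.S. At a glance the combined form does stay within 80 columns:

    int __must_check iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
                                    unsigned int page_order, unsigned int flags);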