RE: [PATCH v5 4/8] iommu: make map and unmap take a page count, similar to flush
> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 10 September 2020 13:48
> To: Paul Durrant <paul@xxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Paul Durrant <pdurrant@xxxxxxxxxx>;
> Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>;
> Roger Pau Monné <roger.pau@xxxxxxxxxx>; George Dunlap <george.dunlap@xxxxxxxxxx>;
> Ian Jackson <ian.jackson@xxxxxxxxxxxxx>; Julien Grall <julien@xxxxxxx>;
> Stefano Stabellini <sstabellini@xxxxxxxxxx>; Jun Nakajima <jun.nakajima@xxxxxxxxx>;
> Kevin Tian <kevin.tian@xxxxxxxxx>
> Subject: Re: [PATCH v5 4/8] iommu: make map and unmap take a page count, similar to flush
>
> On 07.09.2020 09:40, Paul Durrant wrote:
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -2966,9 +2966,11 @@ static int _get_page_type(struct page_info *page, unsigned long type,
> >              mfn_t mfn = page_to_mfn(page);
> >
> >              if ( (x & PGT_type_mask) == PGT_writable_page )
> > -                rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)), PAGE_ORDER_4K);
> > +                rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)),
> > +                                        1ul << PAGE_ORDER_4K);
> >              else
> > -                rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn, PAGE_ORDER_4K,
> > +                rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn,
> > +                                      1ul << PAGE_ORDER_4K,
> >                                        IOMMUF_readable | IOMMUF_writable);
> >
> >              if ( unlikely(rc) )
>
> A few hundred lines up from here there is
>
> int rc2 = iommu_legacy_unmap(d, _dfn(mfn), PAGE_ORDER_4K);
>
> in cleanup_page_mappings().
Oh yes. Fixed.
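For reference, the fixed call in cleanup_page_mappings() presumably ends up
passing a page count rather than an order, along these lines (a sketch, not
a quote from v6):

    /* one 4k page, expressed as a count rather than an order */
    int rc2 = iommu_legacy_unmap(d, _dfn(mfn), 1ul << PAGE_ORDER_4K);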
>
> > --- a/xen/common/grant_table.c
> > +++ b/xen/common/grant_table.c
> > @@ -1225,7 +1225,7 @@ map_grant_ref(
> >              kind = IOMMUF_readable;
> >          else
> >              kind = 0;
> > -        if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 0, kind) )
> > +        if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 1ul, kind) )
> >          {
> >              double_gt_unlock(lgt, rgt);
> >              rc = GNTST_general_error;
> > @@ -1479,9 +1479,9 @@ unmap_common(
> >
> >          kind = mapkind(lgt, rd, op->mfn);
> >          if ( !kind )
> > -            err = iommu_legacy_unmap(ld, _dfn(mfn_x(op->mfn)), 0);
> > +            err = iommu_legacy_unmap(ld, _dfn(mfn_x(op->mfn)), 1ul);
> >          else if ( !(kind & MAPKIND_WRITE) )
> > -            err = iommu_legacy_map(ld, _dfn(mfn_x(op->mfn)), op->mfn, 0,
> > +            err = iommu_legacy_map(ld, _dfn(mfn_x(op->mfn)), op->mfn, 1ul,
>
> For all three of these, I guess either 1ul << PAGE_ORDER_4K or simply 1?
> (Given that the code didn't use PAGE_ORDER_4K so far, I'd slightly
> prefer the latter. I'd be fine making the change while committing, but
> it looks like v6 is going to be needed anyway.)
Ok. Changed.
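For reference, with a plain count the three call sites would read along
these lines (a sketch of the suggested form, not the v6 patch itself):

    /* map_grant_ref(): a single 4k page, so a count of 1 */
    if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 1, kind) )

    /* unmap_common(): likewise a count of 1 */
    err = iommu_legacy_unmap(ld, _dfn(mfn_x(op->mfn)), 1);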
>
> > --- a/xen/drivers/passthrough/amd/iommu_map.c
> > +++ b/xen/drivers/passthrough/amd/iommu_map.c
> > @@ -362,7 +362,7 @@ static unsigned long flush_count(unsigned long dfn, unsigned int page_count,
> >  }
> >
> >  int amd_iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn,
> > -                                unsigned int page_count,
> > +                                unsigned long page_count,
>
> This ought to be accompanied by a similar change to its flush_count()
> helper.
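A sketch of the matching helper change being asked for (signature only; the
trailing 'order' parameter is assumed from the current code, not quoted from
the patch):

    static unsigned long flush_count(unsigned long dfn, unsigned long page_count,
                                     unsigned int order);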
>
> > @@ -632,7 +632,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
>
> >  static int __must_check iommu_flush_iotlb_pages(struct domain *d,
> >                                                   dfn_t dfn,
> > -                                                 unsigned int page_count,
> > +                                                 unsigned long page_count,
> >                                                   unsigned int flush_flags)
> >  {
> >      ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
>
> This similarly ought to be accompanied by a change to its
> iommu_flush_iotlb() helper.
>
It is... in the previous hunk.
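For reference, the change in that earlier hunk presumably widens the
helper's count in the same way, e.g. (parameters other than page_count are
assumptions, not quoted from the patch):

    static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
                                              bool dma_old_pte_present,
                                              unsigned long page_count);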
Paul
> Jan