Re: [Xen-devel] [PATCH 3/4] xen/arm: introduce XENMEM_cache_flush

On Thu, 2 Oct 2014, Jan Beulich wrote:
> >>> On 02.10.14 at 13:57, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> > On Thu, 2 Oct 2014, Jan Beulich wrote:
> >> But again I don't see why Dom0 can't track state for the mappings
> >> it establishes - after all, a grant-ref would be the most natural
> >> thing to pass in here.
> > 
> > I agree it would be more natural to pass the grant-ref, but if Linux
> > knew the grant-ref it wouldn't need the hypercall: it would also know
> > the pfn and could just perform the flush on the pseudo-physical address.
> > 
> > See drivers/xen/swiotlb-xen.c: the dma_map_ops API only gives us an mfn
> > at unmap time. Maintaining an mfn_to_pfn (or to grant_ref) tree with
> > multiple entries per mfn is expensive. The tree would need to be global,
> > maintained for every grant map/unmap operation, and those operations
> > nowadays happen extremely often with netfront/netback. In addition, the
> > lookup itself is not very fast. We would also need to take a lock to
> > handle the case of multiple grants for the same page.
> I don't follow: the first thing xen_unmap_single() does is get
> the physical address corresponding to the DMA address passed in.

On ARM that would just return the mfn again, because Linux doesn't track
mfn to pfn anymore.
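
For concreteness, here is a simplified, purely illustrative sketch of why
that lookup does not help on ARM. The names follow swiotlb-xen and the ARM
Xen page helpers, but the bodies are a sketch of the behaviour described
above, not the verbatim tree contents:

static inline unsigned long mfn_to_pfn(unsigned long mfn)
{
        /* no m2p tracking on ARM: the mfn is handed straight back */
        return mfn;
}

static phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
{
        unsigned long pfn = mfn_to_pfn(PFN_DOWN(baddr));

        /* keep the sub-page offset */
        return ((phys_addr_t)pfn << PAGE_SHIFT) | (baddr & ~PAGE_MASK);
}

So for a grant-mapped foreign page the "physical" address recovered at
unmap time is still its machine address.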

> Furthermore, if only SWIOTLB code is of concern,

It is not: the swiotlb-xen functions are called in all cases, but the
swiotlb internal buffer is only used as a slow path when strictly necessary.
See for example xen_swiotlb_map_page:

if (dma_capable(dev, dev_addr, size) &&
    !range_straddles_page_boundary(phys, size) && !swiotlb_force) {

When this check passes, the fast path is taken.
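
For reference, this is roughly the shape of the surrounding function at the
time, trimmed to show the two paths; error handling and cache maintenance
are omitted, so treat it as a simplified sketch rather than the exact tree
contents:

dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
                                unsigned long offset, size_t size,
                                enum dma_data_direction dir,
                                struct dma_attrs *attrs)
{
        phys_addr_t map, phys = page_to_phys(page) + offset;
        dma_addr_t dev_addr = xen_phys_to_bus(phys);

        /* Fast path: the device can reach the buffer directly, no bouncing. */
        if (dma_capable(dev, dev_addr, size) &&
            !range_straddles_page_boundary(phys, size) && !swiotlb_force)
                return dev_addr;

        /* Slow path: bounce through the swiotlb internal buffer. */
        map = swiotlb_tbl_map_single(dev, start_dma_addr, phys, size, dir);
        if (map == SWIOTLB_MAP_ERROR)
                return DMA_ERROR_CODE;

        return xen_phys_to_bus(map);
}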

> then this is of limited size, and hence you could have a
> xen_io_tlb_nslabs-element array tracking whatever additional information
> you may need for each of the slabs.
> Jan
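
For concreteness, a minimal sketch of the per-slab bookkeeping this
suggests: xen_io_tlb_nslabs is the existing slab count in swiotlb-xen.c,
but the structure, its fields and the init helper below are hypothetical,
only meant to illustrate the idea:

/*
 * Hypothetical per-slab bookkeeping: one entry per swiotlb-xen slab,
 * filled at map time with whatever a later cache flush would need.
 */
struct xen_io_tlb_slab_info {
        grant_ref_t gref;       /* grant backing the slab, if any */
        unsigned long pfn;      /* pseudo-physical frame to flush later */
};

static struct xen_io_tlb_slab_info *xen_io_tlb_slab_info;

static int __init xen_io_tlb_slab_info_init(void)
{
        xen_io_tlb_slab_info = kcalloc(xen_io_tlb_nslabs,
                                       sizeof(*xen_io_tlb_slab_info),
                                       GFP_KERNEL);
        return xen_io_tlb_slab_info ? 0 : -ENOMEM;
}

Entries would be filled in at map time and looked up at unmap time, instead
of maintaining a global mfn-to-pfn tree with locking.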
