[Xen-devel] Re: mthca use of dma_sync_single is bogus
One thought is that if you *do* move to dma_sync_single_range() then
lib/swiotlb.c still needs fixing. It's buggy in that
swiotlb_sync_single_range(dma_addr, offset) calls
swiotlb_sync_single(dma_addr+offset), and this will fail if the offset is
large enough that it ends up dereferencing a different slot index in
io_tlb_orig_addr. So, I should be able to get my swiotlb workaround fixes
accepted upstream as a genuine bug fix. :-)

dma_sync_single_range() looks to me to be the right thing for you to be
using. But I'm not a DMA-API expert.

 -- Keir

On 9/7/07 22:16, "Roland Dreier" <rdreier@xxxxxxxxx> wrote:

> It seems the problems running mthca in a Xen domU have uncovered a bug
> in mthca: mthca uses dma_sync_single in mthca_arbel_write_mtt_seg()
> and mthca_arbel_map_phys_fmr() to sync the MTTs that get written.
> However, Documentation/DMA-API.txt says:
>
>   void
>   dma_sync_single(struct device *dev, dma_addr_t dma_handle, size_t size,
>                   enum dma_data_direction direction)
>
>   synchronise a single contiguous or scatter/gather mapping. All the
>   parameters must be the same as those passed into the single mapping
>   API.
>
> and mthca is *not* following this clear rule: it is trying to sync
> only a subrange of the mapping. Later on in the document, there is:
>
>   void
>   dma_sync_single_range(struct device *dev, dma_addr_t dma_handle,
>                         unsigned long offset, size_t size,
>                         enum dma_data_direction direction)
>
>   does a partial sync, starting at offset and continuing for size. You
>   must be careful to observe the cache alignment and width when doing
>   anything like this. You must also be extra careful about accessing
>   memory you intend to sync partially.
>
> but that is in a section dealing with non-consistent memory, so it's
> not entirely clear to me whether it's kosher to use this as mthca
> wants.
>
> The other alternative is to put the MTT table in coherent memory just
> like the MPT table. That might be the best solution, I suppose...
>
> Michael, anyone else, thoughts on this?
>
>  - R.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
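
[Editorial sketch] To make the slot-index failure Keir describes concrete, here is a
minimal stand-alone sketch, not the real lib/swiotlb.c. The names IO_TLB_SHIFT and
io_tlb_orig_addr only mirror the kernel's naming; the 2 KB slot size and the assumption
that just the first slot of a multi-slot mapping records the original buffer are
simplifications made for illustration. The point is simply why forwarding
dma_addr + offset into a single-slot lookup can select someone else's (or an empty)
bookkeeping entry, while deriving the slot from the base handle and applying the
offset afterwards does not.

/*
 * Stand-alone, user-space model of the swiotlb slot lookup described above.
 * Not kernel code; names and constants are illustrative assumptions.
 */
#include <stdio.h>

#define IO_TLB_SHIFT    11                      /* assume 2 KB bounce-buffer slots */
#define SLOT_SIZE       (1UL << IO_TLB_SHIFT)
#define NR_SLOTS        8

static unsigned long io_tlb_start = 0x100000;   /* fake bounce-buffer base address */
static char *io_tlb_orig_addr[NR_SLOTS];        /* original buffer recorded per slot */

/* Slot index derived from a DMA address, as in the single-mapping sync path. */
static int slot_of(unsigned long dma_addr)
{
        return (int)((dma_addr - io_tlb_start) >> IO_TLB_SHIFT);
}

/* Buggy pattern: the range variant just forwards dma_addr + offset. */
static char *buggy_lookup(unsigned long dma_addr, unsigned long offset)
{
        return io_tlb_orig_addr[slot_of(dma_addr + offset)];
}

/* Safer pattern: find the slot from the base handle, then apply the offset. */
static char *fixed_lookup(unsigned long dma_addr, unsigned long offset)
{
        return io_tlb_orig_addr[slot_of(dma_addr)] + offset;
}

int main(void)
{
        static char buf[4 * SLOT_SIZE];         /* one mapping spanning four slots */

        /* Assumption: only the mapping's first slot records the original buffer. */
        io_tlb_orig_addr[0] = buf;

        unsigned long handle = io_tlb_start;    /* mapping starts in slot 0 */
        unsigned long offset = 3 * SLOT_SIZE;   /* sync a sub-range in the 4th slot */

        printf("buggy lookup: %p (slot %d, entry is not ours)\n",
               (void *)buggy_lookup(handle, offset), slot_of(handle + offset));
        printf("fixed lookup: %p (expected %p)\n",
               (void *)fixed_lookup(handle, offset), (void *)(buf + offset));
        return 0;
}

Compiled as plain C, the buggy lookup prints a null (or foreign) pointer for any
offset that crosses into a later slot, while the fixed lookup returns buf + offset,
i.e. the memory the caller actually asked to sync.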