[Xen-changelog] [linux-2.6.18-xen] Xen dma: avoid unnecessarily SWIOTLB bounce buffering.
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1206955945 -3600
# Node ID 5486a234923da1fbab13eef6165f25c54ab63bd9
# Parent  171ffa6bf3a51ddf83d559c4d9312a3603497ff1
Xen dma: avoid unnecessarily SWIOTLB bounce buffering.

On Xen kernels, BIOVEC_PHYS_MERGEABLE permits merging of disk IOs that
span multiple pages, provided that the pages are both pseudophysically-
AND machine-contiguous:

	(((bvec_to_phys((vec1)) + (vec1)->bv_len) == bvec_to_phys((vec2))) && \
	 ((bvec_to_pseudophys((vec1)) + (vec1)->bv_len) == \
	  bvec_to_pseudophys((vec2))))

However, this best-effort merging of adjacent pages can occur in regions
of dom0 memory which just happen, by virtue of having been initially set
up that way, to be machine-contiguous.  Such pages, which occur outside
of a range created by xen_create_contiguous_region, won't be seen as
contiguous by range_straddles_page_boundary(), so the pci-dma-xen.c code
for dma_map_sg() will send these regions to the swiotlb for bounce
buffering.

This patch adds a new check, check_pages_physically_contiguous(), to
the test for pages straddling page boundaries both in swiotlb_map_sg()
and dma_map_sg(), to capture these ranges and map them directly via
virt_to_bus() rather than through the swiotlb.
Signed-off-by: Stephen Tweedie <sct@xxxxxxxxxx>
---
 arch/i386/kernel/pci-dma-xen.c              |   33 ++++++++++++++++++++++++++++
 include/asm-i386/mach-xen/asm/dma-mapping.h |    8 ------
 2 files changed, 34 insertions(+), 7 deletions(-)

diff -r 171ffa6bf3a5 -r 5486a234923d arch/i386/kernel/pci-dma-xen.c
--- a/arch/i386/kernel/pci-dma-xen.c	Fri Mar 28 14:27:38 2008 +0000
+++ b/arch/i386/kernel/pci-dma-xen.c	Mon Mar 31 10:32:25 2008 +0100
@@ -76,6 +76,39 @@ do { \
 			BUG(); \
 		} \
 	} while (0)
+
+static int check_pages_physically_contiguous(unsigned long pfn,
+					     unsigned int offset,
+					     size_t length)
+{
+	unsigned long next_mfn;
+	int i;
+	int nr_pages;
+
+	next_mfn = pfn_to_mfn(pfn);
+	nr_pages = (offset + length + PAGE_SIZE-1) >> PAGE_SHIFT;
+
+	for (i = 1; i < nr_pages; i++) {
+		if (pfn_to_mfn(++pfn) != ++next_mfn)
+			return 0;
+	}
+	return 1;
+}
+
+int range_straddles_page_boundary(paddr_t p, size_t size)
+{
+	extern unsigned long *contiguous_bitmap;
+	unsigned long pfn = p >> PAGE_SHIFT;
+	unsigned int offset = p & ~PAGE_MASK;
+
+	if (offset + size <= PAGE_SIZE)
+		return 0;
+	if (test_bit(pfn, contiguous_bitmap))
+		return 0;
+	if (check_pages_physically_contiguous(pfn, offset, size))
+		return 0;
+	return 1;
+}
 
 int dma_map_sg(struct device *hwdev, struct scatterlist *sg,
 	       int nents,
diff -r 171ffa6bf3a5 -r 5486a234923d include/asm-i386/mach-xen/asm/dma-mapping.h
--- a/include/asm-i386/mach-xen/asm/dma-mapping.h	Fri Mar 28 14:27:38 2008 +0000
+++ b/include/asm-i386/mach-xen/asm/dma-mapping.h	Mon Mar 31 10:32:25 2008 +0100
@@ -22,13 +22,7 @@ address_needs_mapping(struct device *hwd
 	return (addr & ~mask) != 0;
 }
 
-static inline int
-range_straddles_page_boundary(paddr_t p, size_t size)
-{
-	extern unsigned long *contiguous_bitmap;
-	return ((((p & ~PAGE_MASK) + size) > PAGE_SIZE) &&
-		!test_bit(p >> PAGE_SHIFT, contiguous_bitmap));
-}
+extern int range_straddles_page_boundary(paddr_t p, size_t size);
 
 #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
 #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog