[Xen-devel] Re: [PATCH 07/11] Xen/x86/PCI: Add support for the Xen PCI subsystem



Joerg Roedel wrote:
+static inline int address_needs_mapping(struct device *hwdev,
+                                             dma_addr_t addr)
+{
+     dma_addr_t mask = 0xffffffff;

You can use DMA_32BIT_MASK here.

OK. Well, DMA_BIT_MASK(32), since I think the DMA_XXBIT_MASK macros are considered deprecated.
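
(For reference, DMA_BIT_MASK is defined in include/linux/dma-mapping.h along these lines; the n == 64 special case avoids the undefined behaviour of shifting a 64-bit value by 64:)

/* build a mask with the low n bits set */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))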

+     int ret;
+
+     /* If the device has a mask, use it, otherwise default to 32 bits */
+     if (hwdev && hwdev->dma_mask)
+             mask = *hwdev->dma_mask;

I think the check for a valid hwdev->dma_mask is not necessary; other
IOMMU drivers don't check for this either.

OK.
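
(A minimal sketch of the helper with that check dropped, assuming, like the other IOMMU drivers do, that hwdev->dma_mask is always valid:)

static inline int address_needs_mapping(struct device *hwdev,
                                        dma_addr_t addr)
{
        /* the device needs help if addr has bits set above its mask */
        return (addr & ~*hwdev->dma_mask) != 0;
}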

+
+static int range_straddles_page_boundary(phys_addr_t p, size_t size)
+{
+     unsigned long pfn = PFN_DOWN(p);
+     unsigned int offset = p & ~PAGE_MASK;
+
+     if (offset + size <= PAGE_SIZE)
+             return 0;

You can use iommu_num_pages() from lib/iommu-helper.c here.

Ah, useful. Hm, but iommu_num_pages() takes the addr as an unsigned long, which will silently truncate a 64-bit phys addr on a 32-bit machine.
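
(For context, a sketch of that helper as it looks in lib/iommu-helper.c; the unsigned long addr parameter is where a 64-bit phys_addr_t from a 32-bit/PAE kernel would be truncated:)

unsigned long iommu_num_pages(unsigned long addr, unsigned long len,
                              unsigned long io_page_size)
{
        /* round (in-page offset + length) up to whole IO pages */
        unsigned long size = (addr & (io_page_size - 1)) + len;

        return DIV_ROUND_UP(size, io_page_size);
}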

+static int xen_map_sg(struct device *hwdev, struct scatterlist *sg,
+                     int nents, int direction)
+{
+     struct scatterlist *s;
+     struct page *page;
+     int i, rc;
+
+     BUG_ON(direction == DMA_NONE);
+     WARN_ON(nents == 0 || sg[0].length == 0);
+
+     for_each_sg(sg, s, nents, i) {
+             BUG_ON(!sg_page(s));
+             page = sg_page(s);
+             s->dma_address = xen_dma_map_page(page) + s->offset;
+             s->dma_length = s->length;
+             IOMMU_BUG_ON(range_straddles_page_boundary(
+                             page_to_phys(page), s->length));

I have a question on this. How do you make sure that the memory to map
does not cross page boundaries? I have a stats counter for cross-page
(x-page) requests in the AMD IOMMU code, and around 10% of the requests
are actually x-page for me.

I define a BIOVEC_PHYS_MERGEABLE() which prevents BIOs from being merged across non-contiguous pages. Hm, I wonder where that change has gone? It should probably be part of this series...
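
(Roughly, the idea is to extend the generic physical-contiguity test so two biovecs only merge when the machine frames behind them are contiguous as well. A sketch, assuming the usual pfn_to_mfn() and __BIOVEC_PHYS_MERGEABLE() helpers:)

/* Sketch: a merged segment must not straddle discontiguous machine
 * pages, so the mfn following vec1's end must be vec2's mfn. */
#define BIOVEC_PHYS_MERGEABLE(vec1, vec2)                               \
        (__BIOVEC_PHYS_MERGEABLE(vec1, vec2) &&                         \
         pfn_to_mfn(page_to_pfn((vec1)->bv_page)) +                     \
                PFN_DOWN((vec1)->bv_offset + (vec1)->bv_len) ==         \
         pfn_to_mfn(page_to_pfn((vec2)->bv_page)))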

+     }
+
+     rc = nents;
+
+     flush_write_buffers();
+     return rc;
+}
+
+static void xen_unmap_sg(struct device *hwdev, struct scatterlist *sg,
+                      int nents, int direction)
+{
+     struct scatterlist *s;
+     struct page *page;
+     int i;
+
+     for_each_sg(sg, s, nents, i) {
+             page = pfn_to_page(mfn_to_pfn(PFN_DOWN(s->dma_address)));
+             xen_dma_unmap_page(page);
+     }
+}
+
+static void *xen_alloc_coherent(struct device *dev, size_t size,
+                             dma_addr_t *dma_handle, gfp_t gfp)
+{
+     void *ret;
+     struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
+     unsigned int order = get_order(size);
+     unsigned long vstart;
+     u64 mask;
+
+     /* ignore region specifiers */
+     gfp &= ~(__GFP_DMA | __GFP_HIGHMEM);
+
+     if (mem) {
+             int page = bitmap_find_free_region(mem->bitmap, mem->size,
+                                                  order);

Can you use iommu_area_alloc here?

There's a later patch in this series ("xen/swiotlb: use dma_alloc_from_coherent to get device coherent memory") which converts it to use dma_alloc_from_coherent(). I think that's the right thing to use here, rather than iommu_area_alloc().
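
(For reference, the usage pattern of that API on the alloc side is a short sketch like this:)

void *vaddr;

/* try the device's private coherent pool first; returns non-zero
 * when the request was handled by the pool */
if (dma_alloc_from_coherent(dev, size, dma_handle, &vaddr))
        return vaddr;

/* otherwise fall back to the normal page allocator */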

+static void xen_free_coherent(struct device *dev, size_t size,
+                      void *vaddr, dma_addr_t dma_addr)
+{
+     struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
+     int order = get_order(size);
+
+     if (mem && vaddr >= mem->virt_base &&
+         vaddr < (mem->virt_base + (mem->size << PAGE_SHIFT))) {
+             int page = (vaddr - mem->virt_base) >> PAGE_SHIFT;
+             bitmap_release_region(mem->bitmap, page, order);

Same comment here: you can use iommu_area_free.

I use dma_release_from_coherent() in the later patch.
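
(And the matching sketch for the free side:)

/* returns non-zero if vaddr came from the device's coherent pool */
if (dma_release_from_coherent(dev, order, vaddr))
        return;

/* otherwise it came from the page allocator */
free_pages((unsigned long)vaddr, order);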

Thanks,
   J
