
RE: [Xen-devel] high memory dma update: up against a wall

  • To: "Scott Parish" <srparish@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Nakajima, Jun" <jun.nakajima@xxxxxxxxx>
  • Date: Tue, 12 Jul 2005 21:36:10 -0700
  • Delivery-date: Wed, 13 Jul 2005 04:34:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcWHDld6PqTVWNiTS9OLf+9RCCTRfwAUcmqA
  • Thread-topic: [Xen-devel] high memory dma update: up against a wall

Scott Parish wrote:
> I've been slowly working on the dma problem i ran into; thought i was
> making progress, but i think i'm up against a wall, so more discussion
> and ideas might be helpful.

I think porting swiotlb (arch/ia64/lib/swiotlb.c) is another possible
approach for EM64T, as we are already using it in native x86_64 Linux. We
need at least 64MB of physically contiguous memory below 4GB for that. For
dom0, I think we can find such an area at boot time.

We have a plan to work on that, but it will be after OLS...

Basically, io_tlb_start is the starting address of the buffer. You
need to ensure that the memory is physically contiguous in machine-physical
address space. I think it's easy to find such an area in dom0.
alloc_bootmem_low_pages() may not work, so you may need to write a new
(simple) function.

swiotlb_init_with_default_size (size_t default_size)
{
        unsigned long i;

        if (!io_tlb_nslabs) {
                io_tlb_nslabs = (default_size >> PAGE_SHIFT);
                io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
        }

        /*
         * Get IO TLB memory from the low pages
         */
        io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
                                               (1 << IO_TLB_SHIFT));

The other thing is to use virt_to_bus(), not virt_to_phys(). See below.

void *
swiotlb_alloc_coherent(struct device *hwdev, size_t size,
                       dma_addr_t *dma_handle, int flags)
{
        unsigned long dev_addr;
        void *ret;
        int order = get_order(size);

        /*
         * XXX fix me: the DMA API should pass us an explicit DMA mask
         * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a
         * bit range instead of a 16MB one).
         */
        flags |= GFP_DMA;

        ret = (void *)__get_free_pages(flags, order);
        if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
                /*
                 * The allocated memory isn't reachable by the device.
                 * Fall back on swiotlb_map_single().
                 */
                free_pages((unsigned long) ret, order);
                ret = NULL;
        }

The basic idea of swiotlb is that if the allocated memory is below 4GB,
then just use it. If not, allocate a chunk from the bounce buffer:

        if (!ret) {
                /*
                 * We are either out of memory or the device can't DMA
                 * to GFP_DMA memory; fall back on
                 * swiotlb_map_single(), which will grab memory from
                 * the lowest available address range.
                 */
                dma_addr_t handle;
                handle = swiotlb_map_single(NULL, NULL, size,
                                            DMA_FROM_DEVICE);
                if (dma_mapping_error(handle))
                        return NULL;

                ret = phys_to_virt(handle);

Intel Open Source Technology Center
