RE: [Xen-devel] [PATCH] Make x86_64 swiotlb code to support dma_ops [2/2]

  • To: "Keir Fraser" <keir@xxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: "Langsdorf, Mark" <mark.langsdorf@xxxxxxx>
  • Date: Wed, 28 Feb 2007 15:35:46 -0600
  • Delivery-date: Wed, 28 Feb 2007 13:35:12 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcdbdXGgPI0XllCdQlK5GGtYpF745wACUWEGAAAzDmA=
  • Thread-topic: [Xen-devel] [PATCH] Make x86_64 swiotlb code to support dma_ops [2/2]

> On 28/2/07 20:17, "Langsdorf, Mark" <mark.langsdorf@xxxxxxx> wrote:
> > The first patch creates the arch/x86_64/kernel/pci-dma-xen.c
> > file based on the standard pci-dma.c, and creates
> > arch/x86_64/kernel/swiotlb-xen.c based on
> > arch/i386/kernel/swiotlb.c.
> Do we really need to duplicate the swiotlb code? Did you need 
> to make big changes?

No, the only change I made was to remove the swiotlb declaration
and export, which could have been done in pci-swiotlb-xen instead.

I made the move so that any x86_64-specific changes needed in the
future would be easier to make.  I understand the need to balance
that against keeping the code bases in sync for common fixes.  I
don't have a preference one way or the other.

> Other points that I can see from a quick browse include the fact that
> alloc_coherent() still looks broken afaics

The i386 swiotlb implementation doesn't seem to have an
alloc_coherent().  Since all this patch is intended to do
is move swiotlb into the x86_64 directory, I'm not sure
how to resolve the broken implementation.
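For context, the dispatch pattern under discussion can be sketched in plain C. This is purely illustrative and not the actual kernel code: the real 2.6-era `struct dma_mapping_ops` has many more hooks, and the names `swiotlb_alloc_coherent` and `dma_alloc` here are hypothetical stand-ins. The sketch shows the front end falling back when a backend leaves `alloc_coherent` unset, which is the kind of gap being pointed at:

```c
#include <stddef.h>
#include <stdlib.h>

/* Illustrative only -- models the dma_ops function-pointer dispatch,
 * not the actual struct dma_mapping_ops from the kernel tree. */
struct dma_ops {
    void *(*alloc_coherent)(size_t size);
    void  (*free_coherent)(void *ptr);
};

/* A swiotlb-style backend (hypothetical names); here it just uses
 * calloc in place of the real bounce-buffer pool. */
static void *swiotlb_alloc_coherent(size_t size) { return calloc(1, size); }
static void  swiotlb_free_coherent(void *ptr)    { free(ptr); }

static struct dma_ops swiotlb_dma_ops = {
    .alloc_coherent = swiotlb_alloc_coherent,
    .free_coherent  = swiotlb_free_coherent,
};

/* The arch-wide pointer that pci-dma.c would select at boot time. */
static struct dma_ops *dma_ops = &swiotlb_dma_ops;

/* Generic front end: if the selected backend provides no
 * alloc_coherent hook, fall back to a plain zeroed allocation. */
void *dma_alloc(size_t size)
{
    if (dma_ops && dma_ops->alloc_coherent)
        return dma_ops->alloc_coherent(size);
    return calloc(1, size);
}

void dma_free(void *ptr)
{
    if (dma_ops && dma_ops->free_coherent)
        dma_ops->free_coherent(ptr);
    else
        free(ptr);
}
```

A backend that never fills in `alloc_coherent` silently takes the fallback path, which may be wrong for devices that need bounce-buffered or specially placed memory.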

> and the one-liner in io_apic-xen.c is a bit random
> (since none of the other use-iommu-or-not
> decision points are changed, and the GART stuff which is 
> presumably what is being kludged around is not even
> included in Xen builds yet).

Thanks for catching that.  I'll fix it in the next
version of the patch.
-Mark Langsdorf
AMD, Inc.
