Re: [Xen-devel] Essay on an important Xen decision (long)
Gerd Hoffmann wrote:
> Hi,
>
>> If the only issue for making dom0 VP is DMA, wouldn't it be easier
>> to modify the Linux DMA subsystem[1] to make a special hypercall to
>> essentially pin a VP to a particular MFN that could be used for the
>> DMA?
>
> Linux has a nice API for DMA memory management, see
> Documentation/DMA-mapping.txt.  Basically you pass in a "struct
> page" and an offset (within that page) and get back a DMA address
> you can pass on to your hardware.  That is required for some
> architectures where physical addresses (as seen by the CPU) and bus
> addresses (as seen by the PCI devices) are not identical.  It's also
> needed on architectures which have an IOMMU, in order to
> create/delete mapping entries there.
>
> I think that API should do just fine for any DMA transfer dom0 wants
> to do for its own pages.  xenlinux would simply need a special
> implementation of that API which calls Xen to translate the VP
> address into a DMA address (usually the same as the machine
> address).  Xen would probably also have to handle an IOMMU (if
> present) to ensure secure DMA once we have hardware which supports
> this.

Excellent, thanks for the reference!

> A bit more tricky are DMA transfers for _other_ domains (i.e. what
> the blkback driver has to do).  blkback maps the foreign domain's
> pages into its own address space, and I think there is no way around
> that right now, API-wise, as otherwise there isn't a "struct page"
> for the page ...

There are, of course, other ways around this.  One could have a
hypervisor-level DMA API that allowed bulk transfer of memory between
domains (either by copying or by page flipping, depending on the size
of the buffer).  Another option would be a separate pool of shareable
memory that could be mapped appropriately into a domain's VPM space.

Regards,

Anthony Liguori

> cheers,
>   Gerd
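
For concreteness, here is a minimal sketch of how a driver uses the
DMA mapping API Gerd describes above (see
Documentation/DMA-mapping.txt).  "dev" and "page" are assumed to come
from elsewhere in the driver, and error handling is elided:

    #include <linux/dma-mapping.h>

    static void do_transfer(struct device *dev, struct page *page,
                            unsigned long offset, size_t len)
    {
            /* Translate (page, offset) into a bus address the device
             * can use; on machines with an IOMMU this also installs a
             * mapping entry. */
            dma_addr_t bus = dma_map_page(dev, page, offset, len,
                                          DMA_TO_DEVICE);

            /* ... program the hardware with "bus" and wait for the
             * DMA to complete ... */

            /* Release the mapping (and any IOMMU entry) when done. */
            dma_unmap_page(dev, bus, len, DMA_TO_DEVICE);
    }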
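
The special xenlinux implementation of that API which Gerd suggests
might look roughly like the sketch below.  This is hypothetical:
pfn_to_mfn_hypercall() is a made-up name standing in for whatever
translate-and-pin interface Xen would actually expose.

    static dma_addr_t xen_dma_map_page(struct device *dev,
                                       struct page *page,
                                       unsigned long offset,
                                       size_t size,
                                       enum dma_data_direction dir)
    {
            unsigned long pfn = page_to_pfn(page);

            /* Ask Xen to translate the pseudo-physical frame into
             * the machine frame backing it, pinning it so the
             * translation cannot change while the DMA is in
             * flight.  (Hypothetical call.) */
            unsigned long mfn = pfn_to_mfn_hypercall(pfn);

            /* Without an IOMMU the bus address is simply the machine
             * address; with one, Xen would install an entry and
             * return an I/O-virtual address instead. */
            return ((dma_addr_t)mfn << PAGE_SHIFT) + offset;
    }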
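
The hypervisor-level bulk-transfer API Anthony proposes could be
imagined along these lines.  This is entirely hypothetical; none of
the names below exist in Xen (only domid_t, Xen's domain identifier
type, is real), and it is a sketch of the idea rather than a design:

    /* One request describing a run of frames to move between
     * domains.  The hypervisor would choose between copying and page
     * flipping based on the size of the transfer. */
    struct xen_dma_transfer {
            domid_t       src_dom;    /* domain owning the source pages   */
            unsigned long src_pfn;    /* first source frame (pseudo-phys) */
            unsigned long dst_pfn;    /* first destination frame          */
            unsigned long nr_frames;  /* length of the transfer in pages  */
    };

    /* Would return 0 on success, copying small transfers and
     * flipping page ownership for large ones.  (Hypothetical
     * hypercall.) */
    long HYPERVISOR_dma_transfer(struct xen_dma_transfer *xfer);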
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel