
Re: [Xen-devel] Assigning contiguous memory to a driver domain



On Tue, Sep 21, 2010 at 01:28:30PM +0200, Joanna Rutkowska wrote:
> On 09/20/10 21:48, Konrad Rzeszutek Wilk wrote:
> 
> >>> If so, it requires nonzero Xen free memory ? And that is why when I do
> >>> "ifconfig eth0 down; ifconfig eth0 up" in the driver domain the second one
> >>> fails ?
> > 
> > There are a couple of things happening when you do ifconfig eth0 up. The
> > PTEs used for the virtual addresses of the BARs are updated with _PAGE_IOMAP,
> > which means that the GMFN->PFN->MFN lookup is short-circuited to GMFN->MFN.
> > Obviously that doesn't use any Xen heap memory. The next thing is
> > that the driver might allocate coherent DMA mappings. Those are the
> > ones I think Jan is referring to. For coherent DMA mappings we just
> > do page_alloc and then we swap the memory behind those pages with Xen
> > to be under the 32-bit limit (xen_create_contiguous_region).
> > Naturally when the driver is unloaded the de-allocation will call
> > xen_destroy_contiguous_region. Looking at the code I think it swaps with
> > the highest bit order (so with memory close to the end of physical space).
> > 
> > 
> 
> A coherent DMA mapping == contiguous MFNs, right?

From a simple standpoint - yes. If you dig deeper it is important on Alpha
platforms due to aliasing, but on x86 it really does not matter.
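
To make that concrete, here is a minimal sketch of a coherent allocation as a
driver sees it (the function name and 'bufsz' are made up for illustration);
under Xen the pages behind the returned buffer have been exchanged with the
hypervisor so that the bus address covers machine-contiguous memory:

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Sketch only -- not taken from any real driver. */
static void *alloc_dma_ring(struct pci_dev *pdev, size_t bufsz,
                            dma_addr_t *bus_addr)
{
        /*
         * dma_alloc_coherent() returns a kernel virtual address plus a bus
         * address.  Under Xen the backing pages are swapped for
         * machine-contiguous frames (xen_create_contiguous_region), which
         * is why this call can fail if the hypervisor has no suitable
         * contiguous chunk left.
         */
        return dma_alloc_coherent(&pdev->dev, bufsz, bus_addr, GFP_KERNEL);
}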

> 
> So, is there a way to assure that this page_alloc for coherent DMA
> mapping *always* succeeds for a given domain, assuming it succeeded at
> least once (at its startup)?

No. But the driver does not have to use the coherent DMA API at all.
As a matter of fact, I am not sure that doing 'ifdown' would release
all of the coherent DMA mappings. It really depends on how the driver
was written.
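
Whether 'ifdown' returns that memory depends entirely on where the driver
frees it. A purely hypothetical driver written like the sketch below gives
the contiguous MFNs back in its ->stop() path; one that allocates in probe()
and frees only in remove() keeps them for the lifetime of the module:

#include <linux/netdevice.h>
#include <linux/pci.h>
#include <linux/dma-mapping.h>

#define DEMO_RING_BYTES 4096    /* made-up size, for illustration only */

struct demo_priv {
        struct pci_dev *pdev;
        void *ring;
        dma_addr_t ring_dma;
};

static int demo_open(struct net_device *dev)
{
        struct demo_priv *priv = netdev_priv(dev);

        priv->ring = dma_alloc_coherent(&priv->pdev->dev, DEMO_RING_BYTES,
                                        &priv->ring_dma, GFP_KERNEL);
        return priv->ring ? 0 : -ENOMEM;
}

static int demo_stop(struct net_device *dev)
{
        struct demo_priv *priv = netdev_priv(dev);

        /* This is the point at which the contiguous MFNs go back to Xen. */
        dma_free_coherent(&priv->pdev->dev, DEMO_RING_BYTES,
                          priv->ring, priv->ring_dma);
        return 0;
}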

> 
> >>
> >> Generally the second "up" shouldn't fail as long as the prior "down"
> >> properly returned all resources. See the restrictions above.
> > 
> > Yeah, it might be worth looking at what it is doing to cause this. The
> > e1000/igb are pretty good at cleaning everything up, so you can do
> > ifup;ifdown indefinitely.
> > 
> 
> But if they are so good at cleaning everything up as you say, then wouldn't
> that mean they are giving the contiguous MFNs back to Xen, which
> makes it possible that they will no longer be available when we do ifup
> next time (because e.g. some other drv domain will use them this time)?

No. They might have no need for coherent DMA mappings and instead just
call pci_map_page whenever they need one. pci_map_page takes care of
all of the intricate details of ensuring that the page is visible to the
PCI bus and that the device can do DMA operations on it.
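
For comparison, a streaming mapping in a transmit path looks roughly like
this (a sketch with the skb handling trimmed down; pci_map_page() is the
call that may bounce the data through the SWIOTLB pool if the page is not
directly DMA-able):

#include <linux/pci.h>
#include <linux/skbuff.h>
#include <linux/mm.h>

/* Sketch of mapping/unmapping the linear part of one skb for TX. */
static dma_addr_t demo_map_tx(struct pci_dev *pdev, struct sk_buff *skb)
{
        /* pci_map_page() hides the bounce-buffer/SWIOTLB details. */
        return pci_map_page(pdev, virt_to_page(skb->data),
                            offset_in_page(skb->data), skb_headlen(skb),
                            PCI_DMA_TODEVICE);
}

static void demo_unmap_tx(struct pci_dev *pdev, dma_addr_t bus, size_t len)
{
        pci_unmap_page(pdev, bus, len, PCI_DMA_TODEVICE);
}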

There are two types of operations here. One is pci_alloc_coherent, where
you return the buffers when you are done (which makes a hypercall). The
other is to use pci_map_page, which can use the pool of contiguous MFNs
that SWIOTLB has allocated - that pool is not returned to Xen unless the
domain is terminated.

That SWIOTLB buffer is 64MB and is static - it neither grows nor shrinks
during the lifetime of the guest.
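
If the 64MB default ever turns out to be too small for a driver domain, the
standard swiotlb= boot parameter is the knob to look at (a slab count in 2KB
units, so 32768 slabs == 64MB; whether the Xen-SWIOTLB in a given tree
honours it needs checking for that kernel). For example:

    # guest kernel command line -- double the pool to 128MB
    swiotlb=65536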
> 
> > In reference to the Xen-SWIOTLB for versions other than upstream, there
> > are a couple of implementations at:
> > git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb-2.6.git
> > 
> > for different Linux versions. Which version of kernel do you guys use?
> 
> We use 2.6.34.1 with OpenSUSE xenlinux patches (sorry guys, we decided
> to switch to xenlinux some time ago for better suspend and DRM).

Oh, sad to hear that, as most of the DRM stuff has been or is being fixed
now. This is the list of hardware I've had a chance to test:

http://wiki.xensource.com/xenwiki/XenPVOPSDRM

Though the suspend path needs a bit of loving (I've had problems on AMD
but not that much on Intel). Which hardware did you have trouble with?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

