
Re: [Xen-devel] Buffers not reachable by PCI



On Tue, Dec 13, 2011 at 10:17:50PM +0000, Taylor, Neal E wrote:
> 
> Is it the translation that's in error?
> 
> Modeled after the translation in xen_swiotlb_dma_supported that's used for 
> the problematic comparison, I added "the same" translation to swiotlb_print. 
> I don't understand the results, as dend - dstart is vastly larger than pend - 
> pstart.

You might want to instrument the xen_swiotlb_fixup code to get an idea.

But basically there are "chunks" of 2MB (I think) of contiguous memory
that are swizzled into the memory that starts at io_tlb_start. But
all of that memory SHOULD be under the 4GB limit (set by max_dma_bits).

Sadly, in your case one of those "chunks" ends up past the 4GB
limit - which should never happen. Or if it did happen, it would print
out "Failed to get contiguous memory for DMA from.."

But you don't get any of that.


To get a good idea of this, you could do something like this:

unsigned long i, mfn, next_mfn;

mfn = PFN_DOWN(phys_to_machine(XPADDR(pstart)).maddr);

/* Start one page past pstart; its MFN is already in 'mfn'. */
for (i = pstart + PAGE_SIZE; i < pend; i += PAGE_SIZE) {
        next_mfn = PFN_DOWN(phys_to_machine(XPADDR(i)).maddr);
        if (next_mfn == mfn + 1) {
                mfn++;
        } else {
                /* Discontinuity: the previous chunk ends at mfn. */
                printk(KERN_INFO "MFN 0x%lx->0x%lx\n", mfn, next_mfn);
                mfn = next_mfn;
        }
}

which should print you those "chunks", if my logic here is right.


Can you send me your 'xl info' (or 'xl dmesg'), please?

I tried to reproduce this with a 3.0.4 kernel on an 8GB box and I couldn't
reproduce it. Hm, I will look in your .config in case there is something
funky there.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

