
Re: [Xen-users] Crash while mapping a device's (large) memory region



On Fri, 23 Jun 2006 12:46:44 +0200
Quentin Garnier <qgarnier@xxxxxxxxxxxx> wrote:

> Hi,
> 
> I'm getting a crash trying to map more than 0x106 pages of a device's
> memory region.
> 
> I'm using Xen-unstable from a few days ago (Monday, I think), so it's
> a 2.6.16.13 kernel.
> 
> I have a device with a 32 MB memory region.  I've written a very
> simple module that merely lets userland mmap that region, to expose
> the issue I'm seeing.  The module basically boils down to this:
> 
> In probe():
>         base_addr = pci_resource_start(dev, 2);
> 
> In mmap():
>         remap_pfn_range(vma, vma->vm_start, base_addr >> PAGE_SHIFT,
>             vma->vm_end - vma->vm_start, vma->vm_page_prot);
> 
> Then I made a simple userland tool that mmap()s the device for a size
> given as an argument.  I can map up to 0x106000 bytes, but trying one
> more page gets me a crash in dom0, apparently in hypercall_page, for
> HYPERCALL_update_va_mapping().  It's hard to tell exactly what
> happens, but the trace seems to indicate it comes from
> remap_pfn_range through remap_pte_range.
> 
> I really can't make sense of that value, and I don't think I
> understand enough of Xen's internals to debug further without some
> help, so I'm posting here, looking for clues about what's happening.
> 
> The module works fine under an i386 kernel compiled from the same
> sources.
> 
> Then again, my use of remap_pfn_range might be wrong, but why that
> 0x106 number anyway (it's 1MB and 6 pages)?

Indeed, using remap_pfn_range was wrong; switching to
io_remap_pfn_range makes it work.
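
For the record, here is a minimal sketch of what the corrected mmap
handler can look like (the names mydev_mmap and the device node are
made up; it assumes base_addr was saved from pci_resource_start(dev, 2)
in probe(), as above).  My understanding is that io_remap_pfn_range()
is the variant meant for bus/machine addresses such as a PCI BAR,
whereas on a Xen paravirtualised kernel plain remap_pfn_range() treats
the pfn as ordinary (pseudo-physical) guest memory, which would explain
why the mapping only survived up to the 0x106 page mark.

/* Hypothetical minimal mmap handler; base_addr is saved in probe(). */
static unsigned long base_addr;         /* pci_resource_start(dev, 2) */

static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
{
        unsigned long size = vma->vm_end - vma->vm_start;

        /* Device memory should not be cached by the CPU. */
        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

        /*
         * io_remap_pfn_range() maps I/O (machine) frames; under Xen,
         * remap_pfn_range() would treat the pfn as pseudo-physical RAM.
         */
        if (io_remap_pfn_range(vma, vma->vm_start,
                               base_addr >> PAGE_SHIFT,
                               size, vma->vm_page_prot))
                return -EAGAIN;

        return 0;
}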
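
A userland tester along these lines is enough to exercise it (again,
the device node name is made up; pass the mapping size in bytes as the
argument):

/* maptest.c: mmap /dev/mydev for the number of bytes given on argv[1]. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        size_t len;
        void *p;
        int fd;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <bytes>\n", argv[0]);
                return 1;
        }
        len = strtoul(argv[1], NULL, 0);

        fd = open("/dev/mydev", O_RDWR);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        printf("mapped %zu bytes at %p\n", len, p);

        munmap(p, len);
        close(fd);
        return 0;
}

With the remap_pfn_range version, anything past 0x106000 bytes
triggers the crash described above; with io_remap_pfn_range the whole
32 MB region maps fine.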

Quentin Garnier.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

