
Re: [Xen-devel] Vmap allocator fails to allocate beyond 128MB



>>> On 26.09.14 at 14:17, <vijay.kilari@xxxxxxxxx> wrote:
>   When devices like the SMMU request large ioremap space, and the total
> allocation of vmap space goes beyond 128MB, allocation fails for
> subsequent requests and the following warning is seen:
> 
> create_xen_entries: trying to replace an existing mapping
> addr=40001000 mfn=fffd6
> 
> I found that only 1 page is allocated for the bitmap, which can track
> only 128MB of space, even though 1GB of vmap space is assigned.
> 
> With 1GB of vmap space, the calculations are as follows:
> 
> vm_base = 0x4000000
> vm_end = 0x3ffff
> vm_low = 0x8
> nr = 1
> vm_top = 0x8000
> 
> With the patch below, I could get allocations beyond 128MB,
> 
> where nr = 8 for 1GB of vmap space:
> 
> diff --git a/xen/common/vmap.c b/xen/common/vmap.c
> index 783cea3..369212d 100644
> --- a/xen/common/vmap.c
> +++ b/xen/common/vmap.c
> @@ -27,7 +27,7 @@ void __init vm_init(void)
>      vm_base = (void *)VMAP_VIRT_START;
>      vm_end = PFN_DOWN(arch_vmap_virt_end() - vm_base);
>      vm_low = PFN_UP((vm_end + 7) / 8);
> -    nr = PFN_UP((vm_low + 7) / 8);
> +    nr = PFN_UP((vm_end + 7) / 8);
>      vm_top = nr * PAGE_SIZE * 8;
> 
>      for ( i = 0, va = (unsigned long)vm_bitmap; i < nr; ++i, va += PAGE_SIZE )

Maybe there's a bug somewhere, but the change you suggest above
doesn't look correct: it makes nr == vm_low, and hence the
map_pages_to_xen() after the loop does nothing. That can't be right.
Is it perhaps that this second map_pages_to_xen() doesn't have the
intended effect on ARM?

In any event, allocating just a single page for the bitmap initially
is the expected behavior. Further bitmap pages get allocated on
demand in vm_alloc().

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
