
Re: [Xen-devel] Vmap allocator fails to allocate beyond 128MB



Hi Jan,

On Fri, Sep 26, 2014 at 6:16 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>> On 26.09.14 at 14:17, <vijay.kilari@xxxxxxxxx> wrote:
>>   When devices like the SMMU request large ioremap space and the total
>> allocation of vmap space goes beyond 128MB, allocation fails for
>> subsequent requests and the following warning is seen:
>>
>> create_xen_entries: trying to replace an existing mapping
>> addr=40001000 mfn=fffd6
>>
>> I found that only 1 page is allocated for the bitmap, which can cover
>> only 128MB of space, even though 1GB of vmap space is assigned.
>>
>> With 1GB vmap space following are the calculations
>>
>> vm_base = 0x40000000
>> vm_end = 0x3ffff
>> vm_low = 0x8
>> nr = 1
>> vm_top = 0x8000
>>
>> With the patch below, I could get allocations beyond 128MB
>> (it makes nr = 8 for 1GB of vmap space).
>>
>> diff --git a/xen/common/vmap.c b/xen/common/vmap.c
>> index 783cea3..369212d 100644
>> --- a/xen/common/vmap.c
>> +++ b/xen/common/vmap.c
>> @@ -27,7 +27,7 @@ void __init vm_init(void)
>>      vm_base = (void *)VMAP_VIRT_START;
>>      vm_end = PFN_DOWN(arch_vmap_virt_end() - vm_base);
>>      vm_low = PFN_UP((vm_end + 7) / 8);
>> -    nr = PFN_UP((vm_low + 7) / 8);
>> +    nr = PFN_UP((vm_end + 7) / 8);
>>      vm_top = nr * PAGE_SIZE * 8;
>>
>>      for ( i = 0, va = (unsigned long)vm_bitmap; i < nr; ++i, va += PAGE_SIZE )
>
> Maybe there's a bug somewhere, but what you suggest as a change
> above doesn't look correct: you make nr == vm_low, and hence the
> map_pages_to_xen() after the loop does nothing. That can't be right.
> Is it perhaps that this second map_pages_to_xen() doesn't have the
> intended effect on ARM?

Note: I am testing on an arm64 platform.

The map_pages_to_xen() call after the for loop sets up the mapping for
the remaining vm_bitmap pages. On ARM, this call sets the valid bit to 1
in the PTE entries for that mapping.

void __init vm_init(void)
{
     ....
     for ( i = 0, va = (unsigned long)vm_bitmap; i < nr; ++i, va += PAGE_SIZE )
     {
         struct page_info *pg = alloc_domheap_page(NULL, 0);

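         /* On ARM this call creates a PTE with the valid bit already set. */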
         map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR);
         clear_page((void *)va);
     }
     bitmap_fill(vm_bitmap, vm_low);

     /* Populate page tables for the bitmap if necessary. */
     map_pages_to_xen(va, 0, vm_low - nr, MAP_SMALL_PAGES);
 }

In vm_alloc(), the map_pages_to_xen() call below fails for allocations
beyond 128MB, because the mapping for the next vm_bitmap page was already
established in vm_init(). So map_pages_to_xen() on ARM returns an error.

         if ( start >= vm_top )
         {
             unsigned long va = (unsigned long)vm_bitmap + vm_top / 8;

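             /* On ARM this fails: vm_init() already created a valid mapping at va. */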
             if ( !map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR) )
                 ...
         }

So my patch makes the map_pages_to_xen() call in vm_init() do nothing,
because nr_mfns (the third parameter, vm_low - nr) is 0.
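
For reference, here is a standalone arithmetic check (my own simplified
macros, not the Xen ones) that reproduces the sizing computation for a 1GB
vmap area with 4KB pages, using both the original and the patched formula
for nr:

#include <stdio.h>

#define PAGE_SHIFT  12
#define PAGE_SIZE   (1UL << PAGE_SHIFT)
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)
#define PFN_UP(x)   (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)

int main(void)
{
    unsigned long vm_end = PFN_DOWN(1UL << 30);      /* 0x40000 vmap pages    */
    unsigned long vm_low = PFN_UP((vm_end + 7) / 8); /* 8 bitmap pages needed */
    unsigned long nr_old = PFN_UP((vm_low + 7) / 8); /* original formula: 1   */
    unsigned long nr_new = PFN_UP((vm_end + 7) / 8); /* patched formula:  8   */

    /* vm_top = nr * PAGE_SIZE * 8 is how many vmap pages the mapped bitmap covers. */
    printf("old: nr=%lu vm_top=%#lx pages (%lu MB)\n", nr_old,
           nr_old * PAGE_SIZE * 8, (nr_old * PAGE_SIZE * 8 * PAGE_SIZE) >> 20);
    printf("new: nr=%lu vm_top=%#lx pages (%lu MB)\n", nr_new,
           nr_new * PAGE_SIZE * 8, (nr_new * PAGE_SIZE * 8 * PAGE_SIZE) >> 20);
    return 0;
}

This prints nr=1 / vm_top=0x8000 pages (128 MB) with the original formula and
nr=8 / vm_top=0x40000 pages (1024 MB) with the patch; and since the patch
makes nr == vm_low, vm_low - nr is 0 and the second map_pages_to_xen() call
maps nothing.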

Queries: 1) How does x86 update the page tables even if the present/valid
                bit is set?
             2) Can we allocate all the pages required for vm_bitmap in
                vm_init()? We may waste a few pages, but this makes it work
                for both x86 and ARM (see the sketch after this list).
             3) Can we split vm_init() into generic and arch-specific parts?
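
For query 2, a minimal sketch of what I mean (hypothetical and untested; it
simply runs the existing loop up to vm_low instead of nr and drops the
second map_pages_to_xen() call):

void __init vm_init(void)
{
     ....
     /* Allocate and map every bitmap page (vm_low of them) up front. */
     for ( i = 0, va = (unsigned long)vm_bitmap; i < vm_low; ++i, va += PAGE_SIZE )
     {
         struct page_info *pg = alloc_domheap_page(NULL, 0);

         map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR);
         clear_page((void *)va);
     }
     bitmap_fill(vm_bitmap, vm_low);
     /* No second map_pages_to_xen(): the whole bitmap is already mapped,
        so vm_alloc() never has to populate bitmap pages on demand. */
}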

 Hi Ian,
       Can we do one of the following for ARM?
        1) Add a new option to create_xen_entries(), something like
           RESERVE, where the valid bit is not set, and have
           map_pages_to_xen() choose this option when mfn is 0.
        2) Just return without doing any mapping if mfn is 0 in
           map_pages_to_xen()? (A rough sketch combining both ideas
           follows below.)
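
A minimal sketch of what I have in mind, assuming a RESERVE op is added to
create_xen_entries() as in option 1 (hypothetical, untested; today's ARM
map_pages_to_xen() just forwards to create_xen_entries() with INSERT):

int map_pages_to_xen(unsigned long virt,
                     unsigned long mfn,
                     unsigned long nr_mfns,
                     unsigned int flags)
{
    /* mfn == 0 means "populate page tables only": create the entries
     * without the valid bit set, so a later real mapping can succeed. */
    if ( mfn == 0 )
        return create_xen_entries(RESERVE, virt, mfn, nr_mfns, flags);

    return create_xen_entries(INSERT, virt, mfn, nr_mfns, flags);
}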

>
> In any event, allocating just a single page for the bitmap initially is
> the expected behavior. Further bitmap pages will get allocated on
> demand in vm_alloc().
>
> Jan
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

