
Re: [Xen-devel] [PATCH] xen/p2m: fix extra memory regions accounting



Hello,

On 04/09/15 at 7.07, Juergen Gross wrote:
> Could you try the attached patch? It should do the job. It is booting
> fine on my laptop, but I think you should try it on the machine with
> the memory ranges not at page boundary.
> 
> 
> Juergen
> 
> 
> extramem.patch
> 
> 
> commit 3d0f8aa4d1b4c9c16a81902a197b5d6a77e182a0
> Author: Juergen Gross <jgross@xxxxxxxx>
> Date:   Thu Sep 3 17:44:04 2015 +0200
> 
>     xen: switch extra memory accounting to use pfns
>     
>     Instead of using physical addresses for accounting of extra memory
>     areas available for ballooning switch to pfns as this is much less
>     error prone regarding partial pages.

Thanks for taking care of this! I've tested it on the box that has
non-page-aligned memory ranges and it works fine; just a couple of
comments below.

>     Reported-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>     Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
> 
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index 7a5d566..aa58bc4 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -90,62 +90,65 @@ static void __init xen_parse_512gb(void)
>       xen_512gb_limit = val;
>  }
>  
> -static void __init xen_add_extra_mem(phys_addr_t start, phys_addr_t size)
> +static void __init xen_add_extra_mem(unsigned long start_pfn,
> +                                  unsigned long n_pfns)

Not very important, but for type consistency this should probably be
xen_pfn_t instead of unsigned long here and below.
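
I.e. something along these lines (just a sketch of the prototype to
illustrate, not tested):

static void __init xen_add_extra_mem(xen_pfn_t start_pfn, xen_pfn_t n_pfns)

Ideally the start_pfn/n_pfns fields of xen_extra_mem would then use
xen_pfn_t as well, so the arithmetic above doesn't mix types.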

>  {
>       int i;
>  
>       for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
>               /* Add new region. */
> -             if (xen_extra_mem[i].size == 0) {
> -                     xen_extra_mem[i].start = start;
> -                     xen_extra_mem[i].size  = size;
> +             if (xen_extra_mem[i].n_pfns == 0) {
> +                     xen_extra_mem[i].start_pfn = start_pfn;
> +                     xen_extra_mem[i].n_pfns = n_pfns;
>                       break;
>               }
>               /* Append to existing region. */
> -             if (xen_extra_mem[i].start + xen_extra_mem[i].size == start) {
> -                     xen_extra_mem[i].size += size;
> +             if (xen_extra_mem[i].start_pfn + xen_extra_mem[i].n_pfns ==
> +                 start_pfn) {
> +                     xen_extra_mem[i].n_pfns += n_pfns;
>                       break;
>               }

I also noticed this with the original code: why isn't there a case to
prepend to an existing region? Something like:

/* Prepend to existing region. */
if (start_pfn + n_pfns == xen_extra_mem[i].start_pfn) {
    xen_extra_mem[i].start_pfn = start_pfn;
    xen_extra_mem[i].n_pfns += n_pfns;
    break;
}

>       }
>       if (i == XEN_EXTRA_MEM_MAX_REGIONS)
>               printk(KERN_WARNING "Warning: not enough extra memory regions\n");
>  
> -     memblock_reserve(start, size);
> +     memblock_reserve(PFN_PHYS(start_pfn), PFN_PHYS(n_pfns));
>  }

[...]

> @@ -831,9 +833,11 @@ char * __init xen_memory_setup(void)
>                               chunk_size = min(size, mem_end - addr);
>                       } else if (extra_pages) {
>                               chunk_size = min(size, PFN_PHYS(extra_pages));
> -                             extra_pages -= PFN_DOWN(chunk_size);
> -                             xen_add_extra_mem(addr, chunk_size);
> -                             xen_max_p2m_pfn = PFN_DOWN(addr + chunk_size);
> +                             pfn_s = PFN_UP(addr);
> +                             n_pfns = PFN_DOWN(addr + chunk_size) - pfn_s;

Should xen_add_extra_mem check for empty ranges and bail out early, or
should the caller make sure it doesn't try to add empty ranges?

IMHO it's easier and cleaner to add the check to xen_add_extra_mem.
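
E.g. an early bail-out at the top of xen_add_extra_mem, something like
(rough sketch, untested):

	/* Nothing to add, don't create a zero-sized region. */
	if (n_pfns == 0)
		return;

Note that with the PFN_UP/PFN_DOWN rounding above, a chunk smaller than
one page can make PFN_DOWN(addr + chunk_size) end up below pfn_s, so the
unsigned subtraction would wrap; the caller might want to guard against
that case as well.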

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel