
Re: [Xen-devel] [PATCH 3/7] xen/balloon: account for pages released during memory setup



On Thu, Sep 15, 2011 at 01:29:24PM +0100, David Vrabel wrote:
> From: David Vrabel <david.vrabel@xxxxxxxxxx>
> 
> In xen_memory_setup() pages that occur in gaps in the memory map are
> released back to Xen.  This reduces the domain's current page count in
> the hypervisor.  The Xen balloon driver does not correctly decrease
> its initial current_pages count to reflect this.  If 'delta' pages are
> released and the target is adjusted the resulting reservation is
> always 'delta' less than the requested target.
> 
> This affects dom0 if the initial allocation of pages overlaps the PCI
> memory region but won't affect most domU guests that have been setup
> with pseudo-physical memory maps that don't have gaps.
> 
> Fix this by accounting for the released pages when starting the balloon
> driver.

Does this make the behaviour of the pvops guest similar to the
old-style XenOLinux? If so, perhaps we should include that in the git
description for usability purposes (i.e., when somebody searches the git
log for what changed in Linux v3.2).
> 
> If the domain's targets are managed by xapi, the domain may eventually
> run out of memory and die, because xapi currently gets its target
> calculations wrong and, whenever it is restarted, reduces the target
> by a further 'delta'.

> 
> Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
> ---
>  arch/x86/xen/setup.c  |    7 ++++++-
>  drivers/xen/balloon.c |    4 +++-
>  include/xen/page.h    |    2 ++
>  3 files changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index 46d6d21..c983717 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -39,6 +39,9 @@ extern void xen_syscall32_target(void);
>  /* Amount of extra memory space we add to the e820 ranges */
>  phys_addr_t xen_extra_mem_start, xen_extra_mem_size;
>  
> +/* Number of pages released from the initial allocation. */
> +unsigned long xen_released_pages;
> +
>  /* 
>   * The maximum amount of extra memory compared to the base size.  The
>   * main scaling factor is the size of struct page.  At extreme ratios
> @@ -313,7 +316,9 @@ char * __init xen_memory_setup(void)
>                       extra_pages = 0;
>       }
>  
> -     extra_pages += xen_return_unused_memory(xen_start_info->nr_pages, &e820);
> +     xen_released_pages = xen_return_unused_memory(xen_start_info->nr_pages,
> +                                                   &e820);
> +     extra_pages += xen_released_pages;
>  
>       /*
>        * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index 5dfd8f8..4f59fb3 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -565,7 +565,9 @@ static int __init balloon_init(void)
>  
>       pr_info("xen/balloon: Initialising balloon driver.\n");
>  
> -     balloon_stats.current_pages = xen_pv_domain() ? min(xen_start_info->nr_pages, max_pfn) : max_pfn;
> +     balloon_stats.current_pages = xen_pv_domain()
> +             ? min(xen_start_info->nr_pages - xen_released_pages, max_pfn)
> +             : max_pfn;
>       balloon_stats.target_pages  = balloon_stats.current_pages;
>       balloon_stats.balloon_low   = 0;
>       balloon_stats.balloon_high  = 0;
> diff --git a/include/xen/page.h b/include/xen/page.h
> index 0be36b9..92b61f8 100644
> --- a/include/xen/page.h
> +++ b/include/xen/page.h
> @@ -5,4 +5,6 @@
>  
>  extern phys_addr_t xen_extra_mem_start, xen_extra_mem_size;
>  
> +extern unsigned long xen_released_pages;
> +
>  #endif       /* _XEN_PAGE_H */
> -- 
> 1.7.2.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
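
[Editor's note: for readers outside the thread, the off-by-'delta' accounting
the commit message describes can be sketched with hypothetical numbers. The
names below mirror the kernel variables (nr_pages, max_pfn) but the values are
made up for illustration; this is not the kernel code itself.]

```python
# Hypothetical page counts for a PV domain whose initial allocation
# overlaps a hole in the memory map (e.g. the PCI region in dom0).
nr_pages = 1_000_000   # pages initially allocated to the domain
released = 4_096       # 'delta' pages released back to Xen in xen_memory_setup()
max_pfn = 2_000_000

# The hypervisor's view of the domain after the release.
actual_pages = nr_pages - released

# Before the patch: the balloon driver ignores the released pages, so its
# notion of current_pages overstates the real reservation by 'delta'.
old_current = min(nr_pages, max_pfn)

# After the patch: the released pages are subtracted first.
new_current = min(nr_pages - released, max_pfn)

# Every target adjustment computed from old_current therefore ends up
# 'delta' pages short of what was requested.
assert old_current - actual_pages == released
assert new_current == actual_pages
```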
