
Re: [Xen-devel] [PATCH 2/2] xen/balloon: Fix crash when ballooning on x86 32 bit PAE



These two have been applied to for-linus-4.6, thanks.

I tagged them for stable since they fix a regression in 4.4.

On 17/03/16 16:52, Ross Lagerwall wrote:
> When ballooning on an x86 32 bit PAE system with close to 64 GiB of memory, the
> address returned by allocate_resource may be above 64 GiB.  When using
> CONFIG_SPARSEMEM, this setup is limited to using physical addresses < 64 GiB.
> When adding memory at this address, it runs off the end of the mem_section
> array and causes a crash.  Instead, fail the ballooning request.
> 
> Signed-off-by: Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>
> ---
>  drivers/xen/balloon.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index 12eab50..329695d 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -152,6 +152,8 @@ static DECLARE_WAIT_QUEUE_HEAD(balloon_wq);
>  static void balloon_process(struct work_struct *work);
>  static DECLARE_DELAYED_WORK(balloon_worker, balloon_process);
>  
> +static void release_memory_resource(struct resource *resource);
> +
>  /* When ballooning out (allocating memory to return to Xen) we don't really
>     want the kernel to try too hard since that can trigger the oom killer. */
>  #define GFP_BALLOON \
> @@ -268,6 +270,19 @@ static struct resource *additional_memory_resource(phys_addr_t size)
>               return NULL;
>       }
>  
> +#ifdef CONFIG_SPARSEMEM
> +     {
> +             unsigned long max_pfn = 1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT);

I changed max_pfn to limit, to avoid confusion with the global max_pfn.

David
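
For context, on x86 32 bit PAE with SPARSEMEM, MAX_PHYSMEM_BITS is 36, so
1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT) is the pfn at the 64 GiB boundary,
which is where the mem_section array ends. Based on the forward declaration
of release_memory_resource() added earlier in the patch and the limit
computation quoted above, the completed hunk (with the max_pfn -> limit
rename) would look roughly like the sketch below; the variable names and
error message here are assumptions, not a verbatim copy of the applied patch.

#ifdef CONFIG_SPARSEMEM
	{
		/*
		 * Highest pfn that SPARSEMEM can represent: with
		 * MAX_PHYSMEM_BITS == 36 and 4 KiB pages this is the
		 * pfn at the 64 GiB boundary.
		 */
		unsigned long limit = 1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT);
		unsigned long pfn = res->start >> PAGE_SHIFT;

		if (pfn > limit) {
			/*
			 * Fail the ballooning request rather than run off
			 * the end of the mem_section array.
			 */
			pr_err("New resource outside addressable RAM (%lu > %lu)\n",
			       pfn, limit);
			release_memory_resource(res);
			return NULL;
		}
	}
#endif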

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

