
Re: [Xen-devel] [PATCH RESEND RFC 1/8] mm: Separate free page chunk merging into its own routine



On 27/02/17 00:37, Boris Ostrovsky wrote:
> +static void merge_chunks(struct page_info *pg, unsigned int node,
> +                         unsigned int zone, unsigned int order)
> +{
> +    ASSERT(spin_is_locked(&heap_lock));
> +
> +    /* Merge chunks as far as possible. */
> +    while ( order < MAX_ORDER )
> +    {
> +        unsigned int mask = 1UL << order;

This was unsigned long before.  If order is guaranteed never to be
larger than 31, we are ok.  If not, it needs correcting.

~Andrew

>  /* Free 2^@order set of pages. */
>  static void free_heap_pages(
>      struct page_info *pg, unsigned int order)
>  {
> -    unsigned long mask, mfn = page_to_mfn(pg);
> +    unsigned long mfn = page_to_mfn(pg);
>      unsigned int i, node = phys_to_nid(page_to_maddr(pg)), tainted = 0;
>      unsigned int zone = page_to_zone(pg);
>  
> @@ -977,38 +1024,7 @@ static void free_heap_pages(
>          midsize_alloc_zone_pages = max(
>              midsize_alloc_zone_pages, total_avail_pages / 
> MIDSIZE_ALLOC_FRAC);
>  
> -    /* Merge chunks as far as possible. */
> -    while ( order < MAX_ORDER )
> -    {
> -        mask = 1UL << order;
> -


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
