
Re: [Xen-devel] [PATCH 4/4] x86/PoD: shorten certain operations on higher order ranges



On 09/15/2015 08:37 AM, Jan Beulich wrote:
> @@ -574,41 +576,46 @@ recount:
>       * + There are PoD entries to handle, or
>       * + There is ram left, and we want to steal it
>       */
> -    for ( i=0;
> -          i<(1<<order) && (pod>0 || (steal_for_cache && ram > 0));
> -          i++)
> +    for ( i = 0;
> +          i < (1UL << order) && (pod > 0 || (steal_for_cache && ram > 0));
> +          i += n )
>      {
>          mfn_t mfn;
>          p2m_type_t t;
>          p2m_access_t a;
> +        unsigned int cur_order;
>  
> -        mfn = p2m->get_entry(p2m, gpfn + i, &t, &a, 0, NULL, NULL);
> +        mfn = p2m->get_entry(p2m, gpfn + i, &t, &a, 0, &cur_order, NULL);
> +        if ( order < cur_order )
> +            cur_order = order;
> +        n = 1UL << cur_order;
>          if ( t == p2m_populate_on_demand )
>          {
> -            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), 0, p2m_invalid,
> -                          p2m->default_access);
> -            p2m->pod.entry_count--;
> +            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
> +                          p2m_invalid, p2m->default_access);
> +            p2m->pod.entry_count -= n;
>              BUG_ON(p2m->pod.entry_count < 0);
> -            pod--;
> +            pod -= n;
>          }
>          else if ( steal_for_cache && p2m_is_ram(t) )
>          {
>              struct page_info *page;
> +            unsigned int j;
>  
>              ASSERT(mfn_valid(mfn));
>  
>              page = mfn_to_page(mfn);
>  
> -            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), 0, p2m_invalid,
> -                          p2m->default_access);
> -            set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
> -
> -            p2m_pod_cache_add(p2m, page, 0);
> +            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
> +                          p2m_invalid, p2m->default_access);
> +            for ( j = 0; j < n; ++j )
> +                set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
> +            p2m_pod_cache_add(p2m, page, cur_order);
>  
>              steal_for_cache =  ( p2m->pod.entry_count > p2m->pod.count );

This code will now steal an entire 2MiB page even if we only need a few
individual pages, and then free the excess below (via
p2m_pod_set_cache_target()).  For example, decreasing the reservation by
just four pages that happen to lie within a 2MiB superpage steals all
512 pages into the cache, and the cache-target adjustment then frees the
surplus again.

Upon reflection, this is actually a feature -- if we have singleton
pages in the cache, we can free those first, hopefully keeping the 2MiB
page together.

It would be good to have a comment here to that effect, though; perhaps:

"If we need less than 1<<order, may end up stealing more memory here
than we actually need.  This will be rectified below, however; and
stealing too much and then freeing what we need may allow us to free
smaller pages from the cache, and avoid breaking up superpages."
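
For concreteness, a sketch of how that might look at the top of the
stolen-RAM branch (exact wording and placement up to you):

         else if ( steal_for_cache && p2m_is_ram(t) )
         {
+            /*
+             * If we need less than 1<<order, we may end up stealing
+             * more memory here than we actually need.  This will be
+             * rectified below, however; and stealing too much and then
+             * freeing what we need may allow us to free smaller pages
+             * from the cache, and avoid breaking up superpages.
+             */
             struct page_info *page;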

It occurs to me that the steal_for_cache calculations are also wrong
here -- they should be (p2m->pod.entry_count - pod > p2m->pod.count) --
i.e., we should steal if liabilities would be greater than assets
*after* this function completes, not before.
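
In code terms, the recomputation quoted above would then read something
like this (the same adjustment applies wherever steal_for_cache is
computed):

            /*
             * Steal only if liabilities (outstanding PoD entries) would
             * still exceed assets (cache pages) once the PoD entries in
             * this range have been dropped.
             */
            steal_for_cache = ( p2m->pod.entry_count - pod > p2m->pod.count );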

Everything else looks good, thanks.

I assume you'll resubmit this when the patch it depends on leaves RFC?
In which case I'll wait until I see that version to Ack it.

 -George

