
Re: [Xen-devel] [PATCH] x86/PoD: tighten conditions for checking super page



>>> On 02.11.15 at 17:29, <george.dunlap@xxxxxxxxxx> wrote:
> On 30/10/15 17:39, Jan Beulich wrote:
>> Since calling the function isn't cheap, try to avoid the call when we
>> know up front it won't help; see the code comment for details on those
>> conditions.
>> 
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> 
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -522,7 +522,6 @@ p2m_pod_decrease_reservation(struct doma
>>      if ( unlikely(d->is_dying) )
>>          goto out_unlock;
>>  
>> -recount:
>>      pod = nonpod = ram = 0;
>>  
>>      /* Figure out if we need to steal some freed memory for our cache */
>> @@ -562,15 +561,20 @@ recount:
>>          goto out_entry_check;
>>      }
>>  
>> -    /* Try to grab entire superpages if possible.  Since the common case is for drivers
>> -     * to pass back singleton pages, see if we can take the whole page back and mark the
>> -     * rest PoD. */
>> -    if ( steal_for_cache
>> -         && p2m_pod_zero_check_superpage(p2m, gpfn & ~(SUPERPAGE_PAGES-1)))
>> -    {
>> -        /* Since order may be arbitrary, we may have taken more or less
>> -         * than we were actually asked to; so just re-count from scratch */
>> -        goto recount;
>> +    /*
>> +     * Try to grab entire superpages if possible.  Since the common case is for
>> +     * drivers to pass back singleton pages, see if we can take the whole page
>> +     * back and mark the rest PoD.
>> +     * No need to do this though if
>> +     * - order >= SUPERPAGE_ORDER (the loop below will take care of this)
>> +     * - not all of the pages were RAM (now knowing order < SUPERPAGE_ORDER)
>> +     */
>> +    if ( steal_for_cache && order < SUPERPAGE_ORDER && (ram >> order) &&
>> +         p2m_pod_zero_check_superpage(p2m, gpfn & ~(SUPERPAGE_PAGES - 1)) )
>> +    {
>> +        pod += ram;
>> +        nonpod -= ram;
>> +        ram = 0;
> 
> +1 for the idea; a couple of comments:
> 
> * I think it would be clearer to use "(ram == 1 << order)" instead of
> "ram >> order".  I understand (ram >> order) will be non-zero only if
> ram == 1 << order, but why make people spend brain cycles trying to
> figure that out?

Because I originally thought it would make the CPU spend fewer cycles
on the calculation. But now that I think about it again, I guess I was
wrong (I would have been right only if order > 0, the compiler could
prove this, and it actually made use of the status flags set by the
shift instead of doing a subsequent test to get those flags set).

So - yes, will do.
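
(Purely for illustration, not part of the patch: a minimal standalone
sketch of the equivalence in question, assuming ram can never exceed
1UL << order at this point in the function.)

/* Standalone sketch -- demonstrates that, while 0 <= ram <= (1 << order),
 * (ram >> order) is non-zero exactly when ram == (1 << order). */
#include <assert.h>
#include <stdio.h>

int main(void)
{
    for ( unsigned int order = 0; order <= 9; order++ )
        for ( unsigned long ram = 0; ram <= (1UL << order); ram++ )
            assert( !!(ram >> order) == (ram == (1UL << order)) );

    printf("both tests agree for all ram <= 1 << order\n");
    return 0;
}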

> * If we're going to assume that "ram >> order" implies "all the entries
> are ram", and furthermore that a positive return value implies "all ram
> was changed to pod", wouldn't it be better to do something like the
> following?
> 
>   pod = 1 << order
>   nonpod = ram = 0
> 
> This would be more clearly correct if we change the comparison to ram ==
> 1 << order.

Well, yes, I can do that too. Here I really tried to avoid establishing
an unnecessary dependency between the if() condition and the body
(i.e. as it stands, the condition could change quite a bit without the
calculations going wrong, whereas with what you suggest the
restrictions would be tighter).
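
(Again just a sketch, not the actual code: a standalone check that the
accounting as posted and the suggested variant end up in the same state
whenever ram == 1 << order, which the tightened condition would
guarantee.)

/* Standalone sketch -- when every entry in the range was RAM, i.e.
 * pod == 0 and nonpod == ram == 1 << order, both variants agree. */
#include <assert.h>

int main(void)
{
    const unsigned int order = 4;        /* arbitrary, < SUPERPAGE_ORDER */
    unsigned long pod = 0, nonpod = 1UL << order, ram = 1UL << order;

    /* Variant A (as posted): only assumes all counted pages were RAM. */
    unsigned long pod_a = pod + ram, nonpod_a = nonpod - ram, ram_a = 0;

    /* Variant B (as suggested): additionally leans on the if() condition
     * guaranteeing ram == 1 << order. */
    unsigned long pod_b = 1UL << order, nonpod_b = 0, ram_b = 0;

    assert(pod_a == pod_b && nonpod_a == nonpod_b && ram_a == ram_b);
    return 0;
}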

> * steal_for_cache may now be wrong.  I realize that since now ram == 0
> that all the subsequent "steal_for_cache" expressions will end up as
> "false" anyway, but leaving invariants in an invalid state is sort of
> asking for trouble.
> 
> I'd prefer you just update steal_for_cache; but if not, at least leave a
> comment there saying that it may be wrong and why it doesn't matter.

I'll see whether updating is reasonably straightforward.
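
(For reference, a rough sketch of what re-establishing the invariant
inside the new if() body might look like; it assumes steal_for_cache
keeps being derived from p2m->pod.entry_count > p2m->pod.count, as it
is earlier in p2m_pod_decrease_reservation(), so this is not
necessarily what v2 will do.)

    /* Sketch only -- condition and field names assumed from the quoted hunk. */
    if ( steal_for_cache && order < SUPERPAGE_ORDER && ram == (1UL << order) &&
         p2m_pod_zero_check_superpage(p2m, gpfn & ~(SUPERPAGE_PAGES - 1)) )
    {
        pod = 1UL << order;
        nonpod = ram = 0;
        /* The zero check put a superpage into the cache, so the value of
         * steal_for_cache computed above may be stale; recompute it. */
        steal_for_cache = ( p2m->pod.entry_count > p2m->pod.count );
    }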

Jan

