
Re: [Xen-devel] Ping: Re: [PATCH] x86: correct vCPU dirty CPU handling


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>
  • From: Juergen Gross <jgross@xxxxxxxx>
  • Date: Tue, 22 May 2018 14:38:18 +0200
  • Autocrypt: addr=jgross@xxxxxxxx; prefer-encrypt=mutual
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Manuel Bouyer <bouyer@xxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 22 May 2018 12:38:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 22/05/18 14:29, Andrew Cooper wrote:
> On 15/05/18 09:25, Jan Beulich wrote:
>>>>> On 26.04.18 at 12:52, <JBeulich@xxxxxxxx> wrote:
>>>>>> On 26.04.18 at 11:51, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> On 26/04/18 10:41, Jan Beulich wrote:
>>>>> --- a/xen/arch/x86/mm.c
>>>>> +++ b/xen/arch/x86/mm.c
>>>>> @@ -1202,11 +1202,23 @@ void put_page_from_l1e(l1_pgentry_t l1e,
>>>>>               unlikely(((page->u.inuse.type_info & PGT_count_mask) != 0)) &&
>>>>>               (l1e_owner == pg_owner) )
>>>>>          {
>>>>> +            cpumask_t *mask = this_cpu(scratch_cpumask);
>>>>> +
>>>>> +            cpumask_clear(mask);
>>>>> +
>>>>>              for_each_vcpu ( pg_owner, v )
>>>>>              {
>>>>> -                if ( pv_destroy_ldt(v) )
>>>>> -                    flush_tlb_mask(cpumask_of(v->dirty_cpu));
>>>>> +                unsigned int cpu;
>>>>> +
>>>>> +                if ( !pv_destroy_ldt(v) )
>>>>> +                    continue;
>>>>> +                cpu = read_atomic(&v->dirty_cpu);
>>>>> +                if ( is_vcpu_dirty_cpu(cpu) )
>>>>> +                    __cpumask_set_cpu(cpu, mask);
>>>>>              }
>>>>> +
>>>>> +            if ( !cpumask_empty(mask) )
>>>>> +                flush_tlb_mask(mask);
>>>> Thinking about this, what is wrong with:
>>>>
>>>> bool flush = false;
>>>>
>>>> for_each_vcpu ( pg_owner, v )
>>>>     if ( pv_destroy_ldt(v) )
>>>>         flush = true;
>>>>
>>>> if ( flush )
>>>>    flush_tlb_mask(pg_owner->dirty_cpumask);
>>>>
>>>> This is far less complicated cpumask handling.  As the loop may be long,
>>>> it avoids flushing pcpus which have subsequently switched away from
>>>> pg_owner context.  It also avoids all playing with v->dirty_cpu.
>>> That would look to be correct, but I'm not sure it would be an improvement:
>>> While it may avoid flushing some CPUs, it may then do extra flushes on
>>> others (which another vCPU of the domain has been switched to). Plus it
>>> would flush even those CPUs where pv_destroy_ldt() has returned false,
>>> as long as the function returned true at least once.
>> Ping?
> 
> I'm not sure it is worth trying to optimise this code.  I've got a patch
> for 4.12 to leave it compiled out by default.
> 
> Therefore, Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx> on the
> original patch.
> 

You can add my

Release-acked-by: Juergen Gross <jgross@xxxxxxxx>


Juergen
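
[Editor's note: for readers following the thread, below is a minimal standalone sketch of the
pattern the patch under discussion uses: snapshot each vCPU's dirty CPU with a single atomic
read, accumulate the valid ones into a scratch mask, and issue one TLB flush at the end. The
types and helpers here (struct vcpu_sketch, DIRTY_CPU_CLEAN, flush_tlb_mask_sketch) are
simplified stand-ins for illustration only, not Xen's actual definitions.]

/* Standalone sketch (not Xen code) of the "collect dirty CPUs, flush once" pattern. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS_SKETCH   8
#define DIRTY_CPU_CLEAN  (~0u)  /* sentinel meaning "not running anywhere"; analogous to
                                 * the state that is_vcpu_dirty_cpu() tests for. */

struct vcpu_sketch {
    _Atomic unsigned int dirty_cpu; /* CPU whose TLB may still cache this vCPU's mappings */
    bool has_ldt;                   /* stands in for "pv_destroy_ldt() actually did work" */
};

/* Pretend LDT teardown: returns true only if something was torn down. */
static bool destroy_ldt_sketch(struct vcpu_sketch *v)
{
    bool had = v->has_ldt;
    v->has_ldt = false;
    return had;
}

/* Stand-in for flush_tlb_mask(): just reports which CPUs would receive a flush. */
static void flush_tlb_mask_sketch(unsigned long mask)
{
    for (unsigned int cpu = 0; cpu < NR_CPUS_SKETCH; cpu++)
        if (mask & (1ul << cpu))
            printf("flush TLB on CPU %u\n", cpu);
}

int main(void)
{
    struct vcpu_sketch vcpus[] = {
        { .dirty_cpu = 2,               .has_ldt = true  },
        { .dirty_cpu = DIRTY_CPU_CLEAN, .has_ldt = true  },
        { .dirty_cpu = 5,               .has_ldt = false },
    };
    unsigned long mask = 0;  /* plays the role of this_cpu(scratch_cpumask) */

    for (size_t i = 0; i < sizeof(vcpus) / sizeof(vcpus[0]); i++) {
        if (!destroy_ldt_sketch(&vcpus[i]))
            continue;
        /* Read dirty_cpu exactly once, atomically, so a concurrent context switch
         * cannot change the value between the validity test and its use. */
        unsigned int cpu = atomic_load(&vcpus[i].dirty_cpu);
        if (cpu != DIRTY_CPU_CLEAN)
            mask |= 1ul << cpu;
    }

    if (mask)  /* one flush covering every CPU collected above */
        flush_tlb_mask_sketch(mask);
    return 0;
}

The single atomic read of dirty_cpu is the crux: reading the field once and testing the
snapshot avoids acting on a value that a concurrent context switch could invalidate, which
is the race the flush_tlb_mask(cpumask_of(v->dirty_cpu)) form was exposed to.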

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

