
Re: [PATCH v3 1/5] x86: suppress XPTI-related TLB flushes when possible



On 19.05.2020 11:15, Roger Pau Monné wrote:
> On Tue, May 19, 2020 at 09:55:38AM +0200, Jan Beulich wrote:
>> On 18.05.2020 19:09, Roger Pau Monné wrote:
>>> On Wed, Sep 25, 2019 at 05:23:11PM +0200, Jan Beulich wrote:
>>>> @@ -310,7 +313,16 @@ int pv_domain_initialise(struct domain *
>>>>      /* 64-bit PV guest by default. */
>>>>      d->arch.is_32bit_pv = d->arch.has_32bit_shinfo = 0;
>>>>  
>>>> -    d->arch.pv.xpti = is_hardware_domain(d) ? opt_xpti_hwdom : opt_xpti_domu;
>>>> +    if ( is_hardware_domain(d) && opt_xpti_hwdom )
>>>> +    {
>>>> +        d->arch.pv.xpti = true;
>>>> +        ++opt_xpti_hwdom;
>>>> +    }
>>>> +    if ( !is_hardware_domain(d) && opt_xpti_domu )
>>>> +    {
>>>> +        d->arch.pv.xpti = true;
>>>> +        opt_xpti_domu = 2;
>>>
>>> I wonder whether a store fence is needed here in order to guarantee
>>> that opt_xpti_domu is visible to flush_area_local before proceeding
>>> any further with domain creation.
>>
>> The changed behavior of flush_area_local() becomes relevant only
>> once the new domain runs. This being x86-specific code, the store
>> can't remain pending beyond the point where the function returns
>> (in reality it can't be deferred even past the next [real] function
>> call or the next barrier()). And due to x86's cache-coherent nature
>> (for WB memory), the moment the store insn completes the new value
>> is visible to all other CPUs.
> 
> Yes, I think it's fine because this is x86-specific code. A comment
> in that regard might be nice, but I'm not going to make this a strong
> request.
> 
> Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Thanks.
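
For reference, if such a comment were added, it could simply restate
the reasoning above next to the store, roughly along these lines
(wording mine, surrounding hunk as in the patch):

    if ( !is_hardware_domain(d) && opt_xpti_domu )
    {
        d->arch.pv.xpti = true;
        /*
         * No fence needed: flush_area_local() can observe this store
         * in a relevant way only once the new domain runs, which can't
         * happen before this function has returned, and x86's cache
         * coherence (for WB memory) makes the store visible to all
         * CPUs once the store insn has completed.
         */
        opt_xpti_domu = 2;
    }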

> I also think that turning opt_xpti_domu into a proper atomic and
> incrementing/decrementing it (maybe a cmpxchg would be needed) upon
> PV domain creation/destruction would allow accurately keeping track
> of PV domUs, and hence could be used to further reduce the flushes
> when no PV domains are running?

Possibly, which would take care of (c) among the reasons the commit
message gives. (a) and in particular (b) would then still need
addressing.
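
As a rough illustration of that idea (not a proposed patch;
xpti_domu_count is a hypothetical name, and whether plain inc/dec
suffices or a cmpxchg is really needed would need checking), using the
existing atomic_t helpers:

    /* Hypothetical: number of PV domUs currently having XPTI enabled. */
    static atomic_t xpti_domu_count = ATOMIC_INIT(0);

    /* In pv_domain_initialise(): */
    if ( !is_hardware_domain(d) && opt_xpti_domu )
    {
        d->arch.pv.xpti = true;
        atomic_inc(&xpti_domu_count);
    }

    /* In the corresponding domain destruction path: */
    if ( !is_hardware_domain(d) && d->arch.pv.xpti )
        atomic_dec(&xpti_domu_count);

    /*
     * flush_area_local() could then skip the XPTI-related flush when
     * atomic_read(&xpti_domu_count) == 0 (hwdom tracked separately).
     */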

Jan
