
Re: [Xen-devel] [PATCH 1/4] x86: suppress XPTI-related TLB flushes when possible



>>> On 03.04.19 at 20:52, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 13/03/2019 12:38, Jan Beulich wrote:
>> When there's no XPTI-enabled PV domain at all, there's no need to issue
>> respective TLB flushes. Hardwire opt_xpti_* to false when !PV, and
>> record the creation of PV domains by bumping opt_xpti_* accordingly.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> TBD: The hardwiring to false could be extended to opt_pv_l1tf_* and (for
>>      !HVM) opt_l1d_flush as well.
> 
> For what purpose?  opt_pv_l1tf_* is only read inside a CONFIG_PV section
> (despite how pv_l1tf_domain_init() is laid out - there is an outer ifdef
> as well),

Oh, right, the benefit would be smaller. Still I think a PV-less Xen
would do better to report the command line option as unrecognized.

> while opt_l1d_flush influences the contents of the guests MSR
> load list, which is inherently VT-x only.

Along the above lines, an HVM-less Xen would imo do better to report
the bogus use of the option.
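
Something along these lines is what I have in mind (untested, and the
parse_spec_ctrl() context is reproduced only from memory):

        else if ( (val = parse_boolean("l1d-flush", s, ss)) >= 0 )
        {
            if ( IS_ENABLED(CONFIG_HVM) )
                opt_l1d_flush = val;
            else
                rc = -EINVAL; /* have the option reported as unusable */
        }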

>> --- a/xen/arch/x86/pv/domain.c
>> +++ b/xen/arch/x86/pv/domain.c
>> @@ -270,6 +270,9 @@ void pv_domain_destroy(struct domain *d)
>>      destroy_perdomain_mapping(d, GDT_LDT_VIRT_START,
>>                                GDT_LDT_MBYTES << (20 - PAGE_SHIFT));
>>  
>> +    opt_xpti_hwdom -= IS_ENABLED(CONFIG_LATE_HWDOM) &&
>> +                      !d->domain_id && opt_xpti_hwdom;
>> +
>>      XFREE(d->arch.pv.cpuidmasks);
>>  
>>      FREE_XENHEAP_PAGE(d->arch.pv.gdt_ldt_l1tab);
>> @@ -308,7 +311,16 @@ int pv_domain_initialise(struct domain *
>>      /* 64-bit PV guest by default. */
>>      d->arch.is_32bit_pv = d->arch.has_32bit_shinfo = 0;
>>  
>> -    d->arch.pv.xpti = is_hardware_domain(d) ? opt_xpti_hwdom : opt_xpti_domu;
>> +    if ( is_hardware_domain(d) && opt_xpti_hwdom )
>> +    {
>> +        d->arch.pv.xpti = true;
>> +        ++opt_xpti_hwdom;
>> +    }
>> +    if ( !is_hardware_domain(d) && opt_xpti_domu )
>> +    {
>> +        d->arch.pv.xpti = true;
>> +        opt_xpti_domu = 2;
> 
> This logic is asymmetric.  We will retain TLB flushing after the final
> domu has shut down.

Well, yes. I didn't want to introduce full counting logic, not least
because its management would be non-trivial: once the last PV
DomU has been destroyed, we'd have to wait until the next full
flush before we could decrement the counter, as we may not
bypass earlier flushes.
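
For reference, full counting would have to look roughly like this
(untested sketch, with made-up helper names and all locking/atomicity
concerns left aside):

static unsigned int xpti_domu_count;  /* live XPTI-enabled PV DomUs */
static unsigned int xpti_domu_dying;  /* destroyed, awaiting a full flush */

void xpti_domu_created(void)
{
    ++xpti_domu_count;
}

void xpti_domu_destroyed(void)
{
    /* The count may not drop yet - earlier flushes must not be bypassed. */
    ++xpti_domu_dying;
}

void xpti_full_flush_done(void)
{
    /* Only once a full flush has completed is it safe to stop flushing. */
    xpti_domu_count -= xpti_domu_dying;
    xpti_domu_dying = 0;
}

bool xpti_domu_flush_needed(void)
{
    /* Flushing remains necessary for as long as the count is non-zero. */
    return xpti_domu_count != 0;
}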

In fact I can't figure out anymore why I thought this same
argument wouldn't also apply to Dom0; the goal of course
was that at least in the transient early-boot-PV-Dom0 case we'd
be able to go back to non-flushing mode. But I should probably
drop this - the late-hwdom case is rather exotic anyway.

> I'm also not sure about the hwdom logic.  There is guaranteed to be
> exactly one,

(except aiui for a brief period of time, when the late one is
starting, and Dom0 hasn't been destroyed yet)

> and Xen will shut down when it goes offline, but it may not
> be a PV guest.  opt_xpti_hwdom should be unconditionally 2 on this path
> (I think).

As per above I guess I should make it 2 here, but also drop the
decrement.
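
I.e. for v2 roughly (sketch only, with the pv_domain_destroy()
adjustment dropped altogether):

    if ( is_hardware_domain(d) && opt_xpti_hwdom )
    {
        d->arch.pv.xpti = true;
        /* Record that an XPTI-enabled PV hardware domain existed; never undone. */
        opt_xpti_hwdom = 2;
    }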

>> --- a/xen/include/asm-x86/spec_ctrl.h
>> +++ b/xen/include/asm-x86/spec_ctrl.h
>> @@ -42,7 +42,12 @@ extern bool bsp_delay_spec_ctrl;
>>  extern uint8_t default_xen_spec_ctrl;
>>  extern uint8_t default_spec_ctrl_flags;
>>  
>> +#ifdef CONFIG_PV
>>  extern int8_t opt_xpti_hwdom, opt_xpti_domu;
>> +#else
>> +# define opt_xpti_hwdom false
>> +# define opt_xpti_domu false
>> +#endif
> 
> These now have more complicated interaction with flushing.  At the
> absolute minimum, it needs a sentence or two about the new semantics.

Hmm, would their effect on flushing really belong next to the
declarations? But yes, I'll see about adding something.
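
Perhaps simply something along these lines next to the declarations
(wording to be refined):

/*
 * NB: Beyond recording the command line / default setting, a value of 2
 * indicates that a respective XPTI-enabled PV domain has been created,
 * and hence that XPTI-related TLB flushes need to be issued.
 */
extern int8_t opt_xpti_hwdom, opt_xpti_domu;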

Jan




 

