
Re: [Xen-devel] [PATCH RFC 28/31] xen/x86: Context switch all levelling state in context_switch()



On 22/01/16 14:31, Jan Beulich wrote:
>>>> On 22.01.16 at 15:19, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 22/01/16 09:52, Jan Beulich wrote:
>>>>>> On 16.12.15 at 22:24, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> @@ -145,6 +145,13 @@ void intel_ctxt_switch_levelling(const struct domain *nextd)
>>>>    struct cpumasks *these_masks = &this_cpu(cpumasks);
>>>>    const struct cpumasks *masks = &cpumask_defaults;
>>>>  
>>>> +  if (cpu_has_cpuid_faulting) {
>>>> +          set_cpuid_faulting(nextd && is_pv_domain(nextd) &&
>>>> +                             !is_control_domain(nextd) &&
>>>> +                             !is_hardware_domain(nextd));
>>>> +          return;
>>>> +  }
>>> Considering that you don't even probe the masking MSRs, this seems
>>> inconsistent with your "always level the entire system" choice.
>> In the case that faulting is available, we never want to touch masking. 
>> Faulting is newer and strictly superior to masking.
>>
>> As documented, there is no hardware which supports both.  (In reality,
>> there is one SKU of IvyBridge CPUs which experimentally has both.)
>>
>>
>> The fact that dom0 and the hardware domain are bypassed is a bug IMO. 
> And we appear to disagree here. I'd rather see the rest of the
> series match this current behavior.

I am planning to fix it, but it is the same quantity of work again, on
top of this series.  I am deliberately not conflating all of the
cpuid-related fixes into one series, because it is simply too much work to do
in one go.
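
For context, a minimal sketch of how CPUID faulting is typically probed and
toggled on Intel hardware, assuming the documented MSR layout
(MSR_PLATFORM_INFO bit 31 advertises support, MSR_MISC_FEATURES_ENABLES bit 0
enables the fault).  The rdmsr()/wrmsr() wrappers and helper names below are
illustrative, not the real Xen interfaces:

#include <stdbool.h>
#include <stdint.h>

#define MSR_PLATFORM_INFO              0x000000ce
#define PLATFORM_INFO_CPUID_FAULTING   (1ull << 31)
#define MSR_MISC_FEATURES_ENABLES      0x00000140
#define MISC_FEATURES_CPUID_FAULTING   (1ull << 0)

/* Assumed to exist in the environment. */
extern uint64_t rdmsr(uint32_t msr);
extern void wrmsr(uint32_t msr, uint64_t val);

/* Probe once at boot: support is advertised in MSR_PLATFORM_INFO. */
static bool probe_cpuid_faulting(void)
{
    return rdmsr(MSR_PLATFORM_INFO) & PLATFORM_INFO_CPUID_FAULTING;
}

/*
 * Toggle faulting on the current CPU.  While enabled, CPUID executed at
 * CPL > 0 raises #GP, letting the hypervisor emulate it with the domain's
 * levelled feature policy instead of returning the raw hardware values.
 */
static void set_cpuid_faulting_sketch(bool enable)
{
    uint64_t val = rdmsr(MSR_MISC_FEATURES_ENABLES);

    if ( enable )
        val |= MISC_FEATURES_CPUID_FAULTING;
    else
        val &= ~MISC_FEATURES_CPUID_FAULTING;

    wrmsr(MSR_MISC_FEATURES_ENABLES, val);
}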

Dom0 still gets its "feature levelled" view of the system via emulated
cpuid, just as it does at the moment.
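
To illustrate what "levelling via emulated cpuid" means here: when dom0's
CPUID is intercepted (via faulting, or the PV forced-emulation prefix), the
hypervisor can mask the hardware feature bits with the domain's policy before
handing them back.  A rough, hypothetical sketch -- the structure and
function names are made up for illustration, not taken from the series:

#include <stdint.h>

/* Hypothetical per-domain policy: which leaf-1 feature bits may be seen. */
struct cpuid_policy_sketch {
    uint32_t leaf1_ecx_mask, leaf1_edx_mask;
};

static void emulated_cpuid_leaf1(const struct cpuid_policy_sketch *p,
                                 uint32_t *eax, uint32_t *ebx,
                                 uint32_t *ecx, uint32_t *edx)
{
    /* Start from the real hardware values... */
    asm volatile ( "cpuid"
                   : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
                   : "a" (1), "c" (0) );

    /* ...then hide anything the domain's policy does not allow. */
    *ecx &= p->leaf1_ecx_mask;
    *edx &= p->leaf1_edx_mask;
}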

~Andrew
