
Re: [Xen-devel] [PATCH 06/10] x86/cpuid: Handle leaf 0x6 in guest_cpuid()



On 22/02/17 09:26, Jan Beulich wrote:
>>>> On 22.02.17 at 10:12, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 22/02/17 08:23, Andrew Cooper wrote:
>>> On 22/02/17 07:31, Jan Beulich wrote:
>>>>>>> On 21.02.17 at 18:40, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>> On 21/02/17 17:25, Jan Beulich wrote:
>>>>>>>>> On 20.02.17 at 12:00, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>>>> The PV MSR handling logic has minimal support for some thermal/perf
>>>>>>> operations from the hardware domain, so leak through the implemented
>>>>>>> subset of features.
>>>>>> Does it make sense to continue to special case PV hwdom here?
>>>>> Being able to play with these MSRs will be actively wrong for HVM
>>>>> context.  It is already fairly wrong for PV context, as nothing prevents
>>>>> you being rescheduled across pcpus while in the middle of a read/write
>>>>> cycle on the MSRs.
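
To make the hazard concrete, here is a minimal sketch of the read/modify/write
cycle that goes wrong when an unpinned vcpu migrates between the two MSR
accesses.  It follows the rdmsrl()/wrmsrl() helper style; the flag name is
made up for illustration:

    uint64_t val;

    rdmsrl(MSR_IA32_THERM_CONTROL, val);  /* sampled on pcpu A */

    /* Nothing stops the scheduler moving an unpinned vcpu here. */

    val |= ON_DEMAND_CLOCK_MOD;           /* made-up flag name */
    wrmsrl(MSR_IA32_THERM_CONTROL, val);  /* may land on pcpu B,
                                           * clobbering B's per-core
                                           * state with A's value. */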
>>>> So the MSRs in question are, afaics
>>>> - MSR_IA32_MPERF, MSR_IA32_APERF, MSR_IA32_PERF_CTL (all
>>>>   of which are is_cpufreq_controller() dependent)
>>>> - MSR_IA32_THERM_CONTROL, MSR_IA32_ENERGY_PERF_BIAS
>>>>   (both of which are is_pinned_vcpu() dependent)
>>>> For the latter your argument doesn't apply. For the former, I've
>>>> been wondering for a while whether we shouldn't do away with
>>>> "cpufreq=dom0-kernel".
>>> Hmm.  All good points.  If I can get away without leaking any of this,
>>> that would be ideal.  (Lets see what Linux thinks of such a setup.)
>> Linux seems fine without any of this leakage.
> But is that for a broad range of versions, or just the one you had
> to hand?

3.10 and 4.4 PVOps.  Looking at the 2.6.32-classic source, I can't see
anything which would be a problem.
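
If the leakage really can be dropped, the leaf 0x6 handling in guest_cpuid()
could collapse to simply hiding everything.  Purely as an illustration (this
is not the patch under review):

    case 0x6: /* Thermal and Power Management. */
        *res = EMPTY_LEAF;  /* expose no thermal/perf features */
        break;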

~Andrew

