
Re: [Xen-devel] [PATCH] x86: Control CR0 TS behavior using dev_na_ts_allowed

On 03/17/2014 07:18 AM, George Dunlap wrote:
> On 03/17/2014 02:05 PM, George Dunlap wrote:
>> On 03/17/2014 01:35 PM, Jan Beulich wrote:
>>>>>> On 17.03.14 at 13:42, George Dunlap <George.Dunlap@xxxxxxxxxxxxx> wrote:
>>>> On Mon, Mar 17, 2014 at 8:38 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>>>>> On 17.03.14 at 04:30, Sarah Newman <srn@xxxxxxxxx> wrote:
>>>>> Not being convinced at all that this is the right approach (in
>>>>> particular it remains unclear how an affected guest should deal with
>>>>> running on a hypervisor not supporting the new interface)
>>>> It looks like the intention of this patch was that if the dom0
>>>> administrator enables the new option, then it will be on by default,
>>>> *but* the guest can disable the new behavior.  That way, if an admin
>>>> knows that she's running all PVOPS kernels (no "classic Xen" kernels),
>>>> she can enable it system-wide.  Older PVOPS kernels will behave
>>>> correctly (but a bit slowly), and newer PVOPS kernels will switch to
>>>> the PVABI behavior and reap the performance benefit.

The guest cannot enable or disable this behavior, but it can detect the
behavior using CPUID.  I considered making the behavior configurable from
within the guest, but did not find a clean way of implementing it, since any
decision should really happen very early in the boot process.  Suggestions on
how to do this are welcome.

>>>> Newer PVOPS kernels running on older hypervisors will simply use the
>>>> PVABI behavior.
>>> But if that works correctly, then there's no hypervisor/tools
>>> change needed in the first place.
>> Yes, there's still a need to run *old* PVOPS kernels on *new* hypervisors.
>> That (as I understand it) is the point of this patch.

My assumption is that the accepted Linux fix will be slower, or will use
memory less efficiently, in order to integrate well with other x86
implementations.  So I would like new kernels to be able to detect that they
can use clts/stts safely, and fall back to the workaround only when they
can't.  This is why I added a CPUID field that advertises nm_hardware_ts.
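For illustration, here is a rough C sketch of the guest-side plumbing such a
probe could use.  The Xen hypervisor CPUID leaves start at 0x40000000 with the
well-known "XenVMMXenVMM" signature; which leaf, register, and bit would carry
nm_hardware_ts is left open here, since the patch itself defines the real
location, so the helpers below are kept generic:

```c
#include <stdint.h>
#include <string.h>

/* Reassemble the 12-byte hypervisor signature returned in EBX/ECX/EDX of
 * CPUID leaf 0x40000000.  Under Xen this reads "XenVMMXenVMM".  Note this
 * relies on x86 being little-endian: the raw register bytes are already in
 * string order. */
static void signature_from_regs(uint32_t ebx, uint32_t ecx, uint32_t edx,
                                char out[13])
{
    memcpy(out + 0, &ebx, 4);
    memcpy(out + 4, &ecx, 4);
    memcpy(out + 8, &edx, 4);
    out[12] = '\0';
}

/* Return nonzero if feature bit 'bit' is set in CPUID output 'reg'.
 * The actual leaf/register/bit advertising nm_hardware_ts is NOT defined
 * here -- that is up to the patch under discussion. */
static int cpuid_bit_set(uint32_t reg, unsigned int bit)
{
    return (reg >> bit) & 1u;
}
```

An actual probe would execute the `cpuid` instruction (inline assembly, or
`__get_cpuid()` from GCC's `<cpuid.h>`) for each hypervisor leaf and feed the
resulting registers through these helpers.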

Given that almost all of our customers run Linux, my long-term plan is to turn
this option on for everyone by default and let individual users turn it off if
they are running a classic kernel or a different OS that is PVABI compliant.
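For anyone skimming the thread, the mechanism at stake is the lazy-FPU
protocol built on CR0.TS: the kernel sets TS (stts) on a context switch, the
first FPU instruction in the new task then faults with #NM, and the handler
clears TS (clts) and restores that task's FPU state.  Here is a toy
user-space model of that protocol (all names are illustrative; none of this
is Xen or kernel code):

```c
#include <stdbool.h>

/* Toy model of x86 lazy FPU context switching.  'ts' stands in for
 * CR0.TS; 'fpu_owner' is the task whose state is loaded in the FPU. */
static bool ts;
static int fpu_owner = -1;
static int nm_faults;        /* number of simulated #NM traps taken */

/* On a context switch the kernel only sets TS (stts), deferring the
 * expensive FPU state load until the new task actually uses the FPU. */
static void context_switch_to(int task)
{
    (void)task;
    ts = true;               /* stts */
}

/* The first FPU instruction while TS is set raises #NM; the handler
 * clears TS (clts) and loads the current task's FPU state. */
static void fpu_use(int task)
{
    if (ts) {
        nm_faults++;         /* simulated #NM trap */
        ts = false;          /* clts */
        fpu_owner = task;    /* restore this task's FPU registers */
    }
    /* ... FPU instruction then executes normally ... */
}
```

Tasks that never touch the FPU never pay for a state load; this is the
clts/stts traffic whose semantics the thread is discussing.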

> So we have old hypervisors, new hypervisors with this disabled, and new
> hypervisors with this enabled.  New hypervisors with this disabled behave
> just like old hypervisors.  And we have old pvops kernels, new pvops
> kernels, and "classic Xen" kernels.  And we have "correctness" and
> "performance".  Then we have the following combinations:
> * Old hypervisor / New hypervisor w/ mode disabled:
>  - Old hypervisor, classic kernel: correct and fast.

>  - Old hypervisor, old pvops kernel: fast but buggy.

>  - Old hypervisor, new pvops kernel: correct and fast.
Likely not fast if eagerfpu is the solution, instead of eager allocation or
atomic allocation.

> * New hypervisor (w/ mode enabled):
>  - classic kernel: broken (since it's expecting PVABI TS behavior)
Broken, yes

>  - old pvops: correct but slow
Correct and as fast as it was, because its behavior will not change with
regard to clts/stts.

>  - new pvops kernel: correct and fast (since it will opt-in to the faster 
Correct and as fast as it was.

Xen-devel mailing list


