
Re: [Xen-devel] [PATCH v2 10/30] xen/x86: Annotate VM applicability in featureset



On 12/02/16 17:05, Jan Beulich wrote:
>>>> On 05.02.16 at 14:42, <andrew.cooper3@xxxxxxxxxx> wrote:
>> Use attributes to specify whether a feature is applicable to be exposed to:
>>  1) All guests
>>  2) HVM guests
>>  3) HVM HAP guests
> No provisions for PV-only or shadow-only features?

No.  There are not currently any, and I would prefer to avoid
introducing any.

Of course, if a plausible usecase comes along, we can certainly make
changes to accommodate it.
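
To illustrate how those per-class annotations are meant to be consumed
(a sketch only; the names and widths below are illustrative, not what
the patch uses):

    #include <stdint.h>

    /* Illustrative featureset width, in 32-bit words. */
    #define FSCAPINTS 10

    /*
     * Sketch: a mask per guest class could be generated from the A/S/H
     * annotations (PV = 'A' bits, HVM shadow = 'A'+'S', HVM HAP =
     * 'A'+'S'+'H'), and applied like this when building a guest's view.
     */
    static void apply_featuremask(uint32_t fs[FSCAPINTS],
                                  const uint32_t mask[FSCAPINTS])
    {
        for ( unsigned int i = 0; i < FSCAPINTS; ++i )
            fs[i] &= mask[i]; /* Drop features inapplicable to this class. */
    }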

>
>> +#define X86_FEATURE_MTRR          ( 0*32+12) /*S  Memory Type Range 
>> Registers */
> Why is this being hidden from PV guests (namely Dom0)? Right now
> this is being cleared for PHV only.

It is unilaterally hidden from PV domains other than dom0.  We have no
handling for the MTRR MSRs in emulate_privileged_op(), which means dom0
can't use them anyway.
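
For reference, the kind of dynamic check meant here looks roughly like
this (a sketch with stand-in parameters, not the code in this patch):

    #include <stdbool.h>
    #include <stdint.h>

    #define X86_FEATURE_MTRR_BIT 12        /* CPUID leaf 1, EDX bit 12 */

    /*
     * Sketch: hide MTRR from every PV domain except the hardware domain.
     * Even dom0 cannot program the MTRR MSRs, as emulate_privileged_op()
     * has no handling for them.
     */
    static uint32_t adjust_leaf1_edx(uint32_t edx, bool is_pv, bool is_hwdom)
    {
        if ( is_pv && !is_hwdom )
            edx &= ~(1u << X86_FEATURE_MTRR_BIT);

        return edx;
    }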

>
>> +#define X86_FEATURE_VMXE          ( 1*32+ 5) /*S  Virtual Machine 
>> Extensions */
> Shouldn't this get a "nested-only" class?

I am not sure that would be appropriate.  On the Intel side, this bit is
the only VT-x-related option in cpuid; the rest of the VT-x features
need MSR-levelling, which is moderately far down my TODO list.

Having said that, the AMD side has all the nested features in cpuid.  I
guess this is more a problem for whoever steps up and makes nested virt
a properly supported option, but that is a long way off at this point.

> Also don't we currently require HAP for nested mode to work?

Experimentally, the p2m lock contention from combining shadow paging and
nested virt caused Xen to fall over very frequently with watchdog timeouts.

Having said that, nothing formal is written down one way or another, and
it is possible in limited scenarios to make nested virt work without
HAP.  FWIW, I would be happy with a blanket "no nested virt without HAP"
statement for Xen.
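
A blanket restriction along those lines could be as simple as the
following (a sketch with stand-in parameters, not an actual Xen
interface):

    #include <errno.h>
    #include <stdbool.h>

    /* Sketch: refuse nested virt for a domain which is not using HAP. */
    static int check_nested_virt(bool nested_requested, bool hap_in_use)
    {
        if ( nested_requested && !hap_in_use )
            return -EOPNOTSUPP; /* Shadow + nested p2m: not supported. */

        return 0;
    }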

>
>>  #define X86_FEATURE_OSXSAVE       ( 1*32+27) /*   OSXSAVE */
> Leaving this untouched warrants at least a comment in the commit
> message I would think.

The handling for the magic bits such as OSXSAVE is dynamic, reflecting
guest state (CR4.OSXSAVE) at the time of the query, so it doesn't fit a
static annotation.  I will call that out in the commit message.
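
To illustrate what "dynamic" means here (a sketch of the general shape,
not the code in this series):

    #include <stdbool.h>
    #include <stdint.h>

    #define X86_FEATURE_OSXSAVE_BIT 27     /* CPUID leaf 1, ECX bit 27 */

    /*
     * Sketch: OSXSAVE is recalculated at query time from the guest's
     * view of CR4.OSXSAVE, rather than read from a static featureset.
     */
    static uint32_t adjust_leaf1_ecx(uint32_t ecx, bool guest_cr4_osxsave)
    {
        ecx &= ~(1u << X86_FEATURE_OSXSAVE_BIT);
        if ( guest_cr4_osxsave )
            ecx |= 1u << X86_FEATURE_OSXSAVE_BIT;

        return ecx;
    }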

>
>> +#define X86_FEATURE_RDTSCP        ( 2*32+27) /*S  RDTSCP */
> Hmm, I'm confused - on one hand we currently clear this bit for
> PV guests, but otoh do_invalid_op() emulates it.

Urgh yes - I had forgotten about this gem.  I lacked sufficient tuits to
untangle the swamp which is the vtsc subsystem.

Currently, the dynamic vtsc setting controls whether the RDTSCP feature
flag is visible.
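
i.e. something of this shape, where the decision comes from the domain's
tsc_mode/vtsc state (a sketch; the boolean parameter stands in for that
logic):

    #include <stdbool.h>
    #include <stdint.h>

    #define X86_FEATURE_RDTSCP_BIT 27  /* CPUID leaf 0x80000001, EDX bit 27 */

    /*
     * Sketch: RDTSCP visibility is recalculated from the domain's TSC
     * configuration at query time, rather than being a static property.
     */
    static uint32_t adjust_ext1_edx(uint32_t edx, bool expose_rdtscp)
    {
        if ( !expose_rdtscp )
            edx &= ~(1u << X86_FEATURE_RDTSCP_BIT);

        return edx;
    }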

>
>> +#define X86_FEATURE_LM            ( 2*32+29) /*A  Long Mode (x86-64) */
>> [...]
>> -#define X86_FEATURE_LAHF_LM       ( 3*32+ 0) /*   LAHF/SAHF in long mode */
>> +#define X86_FEATURE_LAHF_LM       ( 3*32+ 0) /*A  LAHF/SAHF in long mode */
> How do you intend to handle exposing these to 64-bit PV guests,
> but not to 32-bit ones?

At the end of this series, via the deep dependency logic used by the
toolstack, and some remaining dynamic checks in the intercept hooks.

By the end of my plans for full cpuid handling, Xen will create a policy
at each domain creation, and keep it consistent with calls such as
set_width.
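
As an illustration of the dynamic side of that (a sketch, with a
stand-in for Xen's is_pv_32bit_domain() predicate):

    #include <stdbool.h>
    #include <stdint.h>

    #define X86_FEATURE_LM_BIT      29 /* CPUID leaf 0x80000001, EDX bit 29 */
    #define X86_FEATURE_LAHF_LM_BIT  0 /* CPUID leaf 0x80000001, ECX bit 0 */

    /*
     * Sketch: a 32-bit PV guest must not see long mode, nor any feature
     * which depends on it.
     */
    static void adjust_for_32bit_pv(uint32_t *edx, uint32_t *ecx,
                                    bool is_32bit_pv)
    {
        if ( !is_32bit_pv )
            return;

        *edx &= ~(1u << X86_FEATURE_LM_BIT);
        *ecx &= ~(1u << X86_FEATURE_LAHF_LM_BIT);
    }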

>
>>  #define X86_FEATURE_EXTAPIC       ( 3*32+ 3) /*   Extended APIC space */
> This currently is left untouched for HVM guests, and gets cleared
> only when !cpu_has_apic (i.e. effectively never) for PV ones.

There is no HVM support for handling a guest trying to use EXTAPIC, and
PV guests don't get to play with the hardware APIC anyway.  As far as I
can tell, it has always been wrong to ever expose this feature.

>
>>  #define X86_FEATURE_MWAITX        ( 3*32+29) /*   MWAIT extension 
>> (MONITORX/MWAITX) */
> Why not exposed to HVM (also for _MWAIT as I now notice)?

Because that is a good chunk of extra work to support.  We would need to
use 4K monitor widths, and add extra p2m handling.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

