
Re: [Xen-devel] [PATCH for-4.8 2/2] x86/traps: Don't call hvm_hypervisor_cpuid_leaf() for PV guests



On 14/11/16 11:38, Jan Beulich wrote:
>>>> On 14.11.16 at 12:01, <andrew.cooper3@xxxxxxxxxx> wrote:
>> Luckily, hvm_hypervisor_cpuid_leaf() and vmx_hypervisor_cpuid_leaf() are safe
>> to execute in the context of a PV guest, but HVM-specific feature flags
>> shouldn't be visible to PV guests.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> albeit ...
>
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -928,6 +928,11 @@ int cpuid_hypervisor_leaves( uint32_t idx, uint32_t sub_idx,
>>          break;
>>  
>>      case 4:
>> +        if ( !has_hvm_container_domain(currd) )
>> +        {
>> +            *eax = *ebx = *ecx = *edx = 0;
>> +            break;
>> +        }
>>          hvm_hypervisor_cpuid_leaf(sub_idx, eax, ebx, ecx, edx);
>>          break;
> ... this being the last leaf, wouldn't it be better to limit the number of
> leaves (reported in leaf 0) to 3 for PV?

I considered this, but decided not to.

The current max leaf handling is fragile, owing to some dubious control
from the toolstack, and the existence of XEN_CPUID_MAX_NUM_LEAVES in the
public API is absolutely broken.

I am going to need to rework all of this anyway, and it's not clear whether
we can/should report fewer than 4 leaves to PV guests.
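
For reference, the alternative would amount to clamping what case 0
advertises, along the lines of the sketch below.  This is purely
illustrative: the real leaf-0 handling in traps.c (the base/limit
arithmetic and XEN_CPUID_MAX_NUM_LEAVES) is more involved, and is exactly
the part I want to rework.

    case 0:
        /* Sketch only: advertise one fewer leaf to PV guests, so the
         * HVM-only leaf 4 is never reported.  'base' and the rest of
         * the leaf-0 handling are elided/illustrative here. */
        *eax = base + (has_hvm_container_domain(currd) ? 4 : 3);
        break;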

~Andrew

