
Re: [Xen-devel] [PATCH v2 2/2] x86/Intel: virtualize support for cpuid faulting



On Mon, Oct 17, 2016 at 9:39 AM, Andrew Cooper
<andrew.cooper3@xxxxxxxxxx> wrote:
> On 17/10/16 17:28, Kyle Huey wrote:
>> On Mon, Oct 17, 2016 at 5:34 AM, Andrew Cooper
>> <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 14/10/16 20:36, Kyle Huey wrote:
>>>> On Fri, Oct 14, 2016 at 10:18 AM, Andrew Cooper
>>>> <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>> On a slightly separate note, as you have just been a successful
>>>>> guinea-pig for XTF, how did you find it?  It is a very new (still
>>>>> somewhat in development) system but the project is looking to try and
>>>>> improve regression testing in this way, especially for new features.  I
>>>>> welcome any feedback.
>>> FWIW, I have just done some library improvements and rebased the test.
>>>
>>>> It's pretty slick.  Much better than what Linux has ;)
>>>>
>>>> I do think it's a bit confusing that xtf_has_fep is false on PV guests.
>>> Now you point it out, I can see how it would be confusing.  This is due
>>> to the history of FEP.
>>>
>>> The sequence `ud2; .ascii 'xen'; cpuid` has been around for ages (it
>>> predates faulting and hardware with mask/override MSRs), and is used by
>>> PV guests to specifically request Xen's CPUID information, rather than
>>> getting the real hardware information.
>>>
>>> There is also an rdtscp variant for PV guests, used for virtual TSC modes.
>>>
>>> In Xen 4.5, I introduced the same prefix to HVM guests, but for
>>> arbitrary instructions.  This was for the express purpose of testing the
>>> x86 instruction emulator.
>>>
>>> As a result, CPUID in PV guests is the odd case out.
>>>
>>>> It might also be nice to (at least optionally) have xtf_assert(cond,
>>>> message) so instead of
>>>>
>>>> if ( cond )
>>>>     xtf_failure(message);
>>>>
>>>> you can write
>>>>
>>>> xtf_assert(!cond, message);
>>>>
>>>> A bonus of doing this is that the framework could actually count how
>>>> many checks were run.  So for the HVM tests (which don't run the FEP
>>>> bits) instead of getting "Test result: SKIP" you could say "Test
>>>> result: 9 PASS 1 SKIP" or something similar.
>>> Boot with "hvm_fep" on the command line and the tests should end up
>>> reporting success.
>> They do not, because the hvm_fep code calls vmx_cpuid_intercept (not
>> vmx_do_cpuid) so it skips the faulting check.  The reason I did this
>> in vmx_do_cpuid originally is that hvm_efer_valid also calls
>> vmx_cpuid_intercept and that should not fault.
>>
>> I could push the cpuid faulting code down into vmx_cpuid_intercept,
>> give it a non-void return value so it can tell its callers not to
>> advance the IP in this situation, and make hvm_efer_valid save, clear,
>> and restore the cpuid_fault flag on the vcpu to call
>> vmx_cpuid_intercept.  Though it's not immediately obvious to me that
>> hvm_efer_valid is always called with v == current.  Do you think it's
>> worth it for this testing code?
>
> This isn't just for testing code.  It also means that cpuid faulting
> support won't work with introspected domains, which can also end up
> emulating cpuid instructions because of restricted execute permissions
> on a page.
>
> The hvm_efer_valid() tangle can't be untangled at the moment; the use of
> vmx_cpuid_intercept() is deliberate to provide accurate behaviour with
> the handling of EFER_SCE.
>
> Your best bet here is to put a faulting check in hvmemul_cpuid() as well.

That's not quite what we want either, because hvmemul_cpuid will also
be called when clzero is emulated.

- Kyle

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
