
Re: [Xen-devel] [PATCH v2 06/12] VMX: add VMFUNC leaf 0 (EPTP switching) to emulator.



>>> On 24.06.15 at 22:29, <edmund.h.white@xxxxxxxxx> wrote:
> On 06/24/2015 05:47 AM, Andrew Cooper wrote:
>>> +    case EXIT_REASON_VMFUNC:
>>> +        if ( vmx_vmfunc_intercept(regs) == X86EMUL_OKAY )
>> 
>> This is currently an unconditional failure, and I don't see subsequent
>> patches which alter vmx_vmfunc_intercept().  Shouldn't
>> vmx_vmfunc_intercept() switch on eax and optionally call
>> p2m_switch_vcpu_altp2m_by_id()?
> 
> If the VMFUNC instruction was valid, the hardware would have executed it.
> The only time a VMFUNC exit occurs is if the hardware supports VMFUNC
> and the hypervisor has enabled it, but the VMFUNC instruction is
> invalid in some way and can't be executed (because EAX != 0, for example).
> 
> There are only two choices: crash the domain or inject #UD (which is the
> closest analogue to what happens in the absence of a hypervisor and will
> probably crash the OS in the domain). I chose the latter in the code I
> originally wrote; Ravi chose the former in his patch. I don't have a
> strong opinion either way, but I think these are the only two choices.
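
For reference, the two alternatives boil down to roughly the following
at the EXIT_REASON_VMFUNC dispatch point (only a sketch, with v being
the current vCPU as elsewhere in vmx_vmexit_handler()):

    case EXIT_REASON_VMFUNC:
        /* Choice 1: treat the failed VMFUNC as fatal for the guest. */
        domain_crash(v->domain);
        break;

versus

    case EXIT_REASON_VMFUNC:
        /*
         * Choice 2: raise #UD, i.e. what the guest would see when
         * executing an invalid VMFUNC on bare hardware.
         */
        hvm_inject_hw_exception(TRAP_invalid_op,
                                HVM_DELIVER_NO_ERROR_CODE);
        break;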

Injecting an exception should always be preferred, as that gives the
guest at least a theoretical chance of recovering. The closer to real
hardware, the better: if hardware without a hypervisor raises #UD
here, so should the emulation. (Admittedly the case is somewhat
fuzzy, as the instruction specifically exists to be used under a
hypervisor. But the fact that #UD is raised for EAX >= 64 makes it a
good candidate for smaller but invalid EAX values too, imo.)
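
Concretely, wired into the caller quoted above, I'd expect something
along these lines (again just an untested sketch; whether the
injection lives here or inside vmx_vmfunc_intercept() itself is a
matter of taste, and update_guest_eip() is vmx.c's usual helper to
step past a successfully handled instruction):

    case EXIT_REASON_VMFUNC:
        /*
         * Getting here means hardware could not execute the VMFUNC;
         * reflect the #UD the guest would see without a hypervisor,
         * leaving RIP pointing at the faulting instruction.
         */
        if ( vmx_vmfunc_intercept(regs) != X86EMUL_OKAY )
            hvm_inject_hw_exception(TRAP_invalid_op,
                                    HVM_DELIVER_NO_ERROR_CODE);
        else
            update_guest_eip();
        break;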

> I hope this answers Jan's question in another email on the same subject.

It does, thanks.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

