Re: [Xen-devel] [PATCH RFC V4 5/5] xen: Handle resumed instruction based on previous mem_event reply
On 08/04/2014 05:33 PM, Jan Beulich wrote:
>>>> On 04.08.14 at 13:30, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> In a scenario where a page fault that triggered a mem_event occurred,
>> p2m_mem_access_check() will now be able to either 1) emulate the
>> current instruction, or 2) emulate it, but not allow it to perform
>> any writes.
>>
>> Changes since V1:
>> - Removed the 'skip' code which required computing the current
>> instruction length.
>> - Removed the set_ad_bits() code that attempted to modify the
>> 'accessed' and 'dirty' bits for instructions that the emulator
>> can't handle at the moment.
>>
>> Changes since V2:
>> - Moved the __vmread(EXIT_QUALIFICATION, &exit_qualification); code
>> in vmx.c, accessible via hvm_funcs.
>> - Incorporated changes by Andrew Cooper ("[PATCH 1/2] Xen/mem_event:
>> Validate the response vcpu_id before acting on it.")
>>
>> Changes since V3:
>> - Collapsed verbose lines into a single "else if()".
>> - Changed an int to bool_t.
>> - Fixed a minor coding style issue.
>> - Now computing the first parameter to hvm_emulate_one_full()
>> (replaced an "if" with a single call).
>> - Added code comments about eip and gla reset (clarity issue).
>> - Removed duplicate code by Andrew Cooper (introduced in V2,
>> since committed).
>>
>> Signed-off-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
>> ---
>> xen/arch/x86/domain.c | 3 ++
>> xen/arch/x86/hvm/vmx/vmx.c | 13 ++++++
>> xen/arch/x86/mm/p2m.c | 85 ++++++++++++++++++++++++++++++++++++++++
>> xen/include/asm-x86/domain.h | 9 +++++
>> xen/include/asm-x86/hvm/hvm.h | 2 +
>> xen/include/public/mem_event.h | 12 +++---
>> 6 files changed, 119 insertions(+), 5 deletions(-)
>>
>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> index e896210..af9b213 100644
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -407,6 +407,9 @@ int vcpu_initialise(struct vcpu *v)
>>
>> v->arch.flags = TF_kernel_mode;
>>
>> + /* By default, do not emulate */
>> + v->arch.mem_event.emulate_flags = 0;
>> +
>> rc = mapcache_vcpu_init(v);
>> if ( rc )
>> return rc;
>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>> index c0e3d73..150fe9f 100644
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -1698,6 +1698,18 @@ static void vmx_enable_intro_msr_interception(struct domain *d)
>> }
>> }
>>
>> +static bool_t vmx_exited_by_pagefault(void)
>> +{
>> + unsigned long exit_qualification;
>> +
>> + __vmread(EXIT_QUALIFICATION, &exit_qualification);
>> +
>> + if ( (exit_qualification & EPT_GLA_FAULT) == 0 )
>> + return 0;
>> +
>> + return 1;
>> +}
>> +
>> static struct hvm_function_table __initdata vmx_function_table = {
>> .name = "VMX",
>> .cpu_up_prepare = vmx_cpu_up_prepare,
>> @@ -1757,6 +1769,7 @@ static struct hvm_function_table __initdata vmx_function_table = {
>> .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
>> .hypervisor_cpuid_leaf = vmx_hypervisor_cpuid_leaf,
>> .enable_intro_msr_interception = vmx_enable_intro_msr_interception,
>> + .exited_by_pagefault = vmx_exited_by_pagefault,
>> };
>>
>> const struct hvm_function_table * __init start_vmx(void)
>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
>> index 069e869..da1bc2d 100644
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -1391,6 +1391,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla,
>> p2m_access_t p2ma;
>> mem_event_request_t *req;
>> int rc;
>> + unsigned long eip = guest_cpu_user_regs()->eip;
>>
>> /* First, handle rx2rw conversion automatically.
>> * These calls to p2m->set_entry() must succeed: we have the gfn
>> @@ -1443,6 +1444,36 @@ bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla,
>> return 1;
>> }
>> }
>> + else if ( hvm_funcs.exited_by_pagefault && !hvm_funcs.exited_by_pagefault() ) /* don't send a mem_event */
>
> DYM
>
> else if ( !hvm_funcs.exited_by_pagefault ||
> !hvm_funcs.exited_by_pagefault() )
Well, no. With that version, if hvm_funcs.exited_by_pagefault == 0
(which is the SVM case now), the branch would always be taken and we
would just emulate the current instruction. In other words, on SVM no
mem_event would ever be sent and everything would be emulated.
With the original code, if hvm_funcs.exited_by_pagefault is not set,
i.e. in the SVM case, _all_ mem_events are being sent out (even those
that happened when exiting by nested pagefault). I'm not sure what the
status of SVM mem_event is at the moment, but it seemed the safer choice.
Sorry for the late reply, I've lost track of this question while
answering others.
Thanks,
Razvan Cojocaru
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel