Re: [Xen-devel] [PATCH v4 2/2] x86/emulate: Send vm_event from emulate
On 22.05.2019 12:56, Jan Beulich wrote:
>>>> On 20.05.19 at 14:55, <aisaila@xxxxxxxxxxxxxxx> wrote:
>> This patch aims to have mem access vm events sent from the emulator.
>> This is useful in the case of emulated instructions that cause
>> page-walks on access protected pages.
>>
>> We use hvmemul_map_linear_addr() ro intercept r/w access and
>> hvmemul_insn_fetch() to intercept exec access.
>
> I'm afraid I don't understand this sentence. Or wait - is this a
> simple typo, and you mean "to" instead of "ro"?

Yes, that is a typo; it was meant to be "to".

>
>> First we try to send a vm event and if the event is sent then emulation
>> returns X86EMUL_ACCESS_EXCEPTION. If the event is not sent then the
>> emulation goes on as expected.
>
> Perhaps it's obvious for a vm-event person why successful sending
> of an event is to result in X86EMUL_ACCESS_EXCEPTION, but it's not
> to me, despite having looked at prior versions. Can this (odd at the
> first glance) behavior please be briefly explained here?

If the event was successfully sent, then the emulation has to stop and
return.

>
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -15,6 +15,7 @@
>>  #include <xen/paging.h>
>>  #include <xen/trace.h>
>>  #include <xen/vm_event.h>
>> +#include <xen/monitor.h>
>>  #include <asm/event.h>
>>  #include <asm/i387.h>
>>  #include <asm/xstate.h>
>> @@ -26,6 +27,7 @@
>>  #include <asm/hvm/support.h>
>>  #include <asm/hvm/svm/svm.h>
>>  #include <asm/vm_event.h>
>> +#include <asm/altp2m.h>
>
> In both cases please try to insert at least half way alphabetically
> (I didn't check if the directives are fully sorted already), rather
> than blindly at the end.

Ok, I will correct that.

>
>> @@ -619,6 +621,68 @@ static int hvmemul_linear_to_phys(
>>      return X86EMUL_OKAY;
>>  }
>>
>> +static bool hvmemul_send_vm_event(unsigned long gla,
>> +                                  uint32_t pfec, unsigned int bytes,
>> +                                  struct hvm_emulate_ctxt ctxt)
>> +{
>> +    xenmem_access_t access;
>> +    vm_event_request_t req = {};
>> +    gfn_t gfn;
>> +    paddr_t gpa;
>> +    unsigned long reps = 1;
>> +    int rc;
>> +
>> +    if ( !ctxt.send_event || !pfec )
>
> Why the !pfec part of the condition?

Because pfec is used to check the type of access violation: if it is 0,
then we do not want to call get_mem_access or get the gpa, as it is
clear that there will be no violation.

>
>> +        return false;
>> +
>> +    rc = hvmemul_linear_to_phys(gla, &gpa, bytes, &reps, pfec, &ctxt);
>
> As said before - I don't think it's a good idea to do the page walk
> twice: This and the pre-existing one can easily return different
> results.

I do this just to get the gpa. If there is another way, I will gladly
use it.

>
> Additionally, as also said before (I think), the function may raise
> #PF, which you don't seem to deal with despite discarding the
> X86EMUL_EXCEPTION return value ...
>
>> +    if ( rc != X86EMUL_OKAY )
>> +        return false;
>
> ... here.
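For illustration only, a minimal sketch of one way the X86EMUL_EXCEPTION
case could be handled instead of being silently discarded (this assumes
the context is passed by pointer rather than by value, so that clearing
the latched event actually takes effect; x86_emul_reset_event() is the
existing helper for that):

    rc = hvmemul_linear_to_phys(gla, &gpa, bytes, &reps, pfec, ctxt);
    if ( rc == X86EMUL_EXCEPTION )
    {
        /*
         * The walk latched an event (e.g. #PF) for the guest.  This call
         * only needs the translation, so drop the event here rather than
         * leaving it pending as a side effect of the probe.
         */
        x86_emul_reset_event(&ctxt->ctxt);
        return false;
    }
    if ( rc != X86EMUL_OKAY )
        return false;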
>
>> +    gfn = gaddr_to_gfn(gpa);
>> +
>> +    if ( p2m_get_mem_access(current->domain, gfn, &access,
>> +                            altp2m_vcpu_idx(current)) != 0 )
>> +        return false;
>> +
>> +    switch ( access ) {
>> +    case XENMEM_access_x:
>> +    case XENMEM_access_rx:
>> +        if ( pfec & PFEC_write_access )
>> +            req.u.mem_access.flags = MEM_ACCESS_R | MEM_ACCESS_W;
>> +        break;
>> +
>> +    case XENMEM_access_w:
>> +    case XENMEM_access_rw:
>> +        if ( pfec & PFEC_insn_fetch )
>> +            req.u.mem_access.flags = MEM_ACCESS_X;
>> +        break;
>> +
>> +    case XENMEM_access_r:
>> +    case XENMEM_access_n:
>> +        if ( pfec & PFEC_write_access )
>> +            req.u.mem_access.flags |= MEM_ACCESS_R | MEM_ACCESS_W;
>> +        if ( pfec & PFEC_insn_fetch )
>> +            req.u.mem_access.flags |= MEM_ACCESS_X;
>> +        break;
>> +
>> +    default:
>> +        return false;
>> +    }
>
> Aren't you looking at the leaf page here, rather than at any of the
> involved page tables? Or am I misunderstanding the description
> saying "page-walks on access protected pages"?

We want to ignore the write accesses to the page tables themselves and
only fire a vm_event for "regular" pages possibly hit by the actual
instruction that has also happened to trigger the A/D write(s). So we
don't want to send out vm_events for written-to page tables at all.

>
>> @@ -636,6 +700,7 @@ static void *hvmemul_map_linear_addr(
>>      unsigned int nr_frames = ((linear + bytes - !!bytes) >> PAGE_SHIFT) -
>>          (linear >> PAGE_SHIFT) + 1;
>>      unsigned int i;
>> +    gfn_t gfn;
>>
>>      /*
>>       * mfn points to the next free slot. All used slots have a page reference
>> @@ -674,7 +739,7 @@ static void *hvmemul_map_linear_addr(
>>          ASSERT(mfn_x(*mfn) == 0);
>>
>>          res = hvm_translate_get_page(curr, addr, true, pfec,
>> -                                     &pfinfo, &page, NULL, &p2mt);
>> +                                     &pfinfo, &page, &gfn, &p2mt);
>>
>>          switch ( res )
>>          {
>
> Are these two hunks leftovers? You don't use "gfn" anywhere.

Yes, there is no need for the gfn any more.

>
>> @@ -1248,7 +1318,21 @@ int hvmemul_insn_fetch(
>>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>>      /* Careful, as offset can wrap or truncate WRT insn_buf_eip. */
>>      uint8_t insn_off = offset - hvmemul_ctxt->insn_buf_eip;
>> +    uint32_t pfec = PFEC_page_present | PFEC_insn_fetch;
>> +    unsigned long addr, reps = 1;
>> +    int rc = 0;
>> +
>> +    rc = hvmemul_virtual_to_linear(
>> +        seg, offset, bytes, &reps, hvm_access_insn_fetch, hvmemul_ctxt, &addr);
>> +
>> +    if ( rc != X86EMUL_OKAY || !bytes )
>> +        return rc;
>> +
>> +    if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
>> +        pfec |= PFEC_user_mode;
>>
>> +    if ( hvmemul_send_vm_event(addr, pfec, bytes, *hvmemul_ctxt) )
>> +        return X86EMUL_ACCESS_EXCEPTION;
>>      /*
>>       * Fall back if requested bytes are not in the prefetch cache.
>>       * But always perform the (fake) read when bytes == 0.
>
> Despite what was said before you're still doing things a 2nd time
> here just because of hvmemul_send_vm_event()'s needs, even
> if that function ends up bailing right away.

I don't understand what things are done twice. Can you please explain?

>
> Also please don't lose the blank line ahead of the comment you
> add code ahead of.
>
>> --- a/xen/arch/x86/x86_emulate/x86_emulate.h
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.h
>> @@ -162,6 +162,8 @@ struct x86_emul_fpu_aux {
>>  #define X86EMUL_UNRECOGNIZED X86EMUL_UNIMPLEMENTED
>>  /* (cmpxchg accessor): CMPXCHG failed. */
>>  #define X86EMUL_CMPXCHG_FAILED 7
>> +/* Emulator tried to access a protected page. */
>> +#define X86EMUL_ACCESS_EXCEPTION 8
>
> This still doesn't make clear what the difference is to
> X86EMUL_EXCEPTION.
We need a return code that has no side effects: unlike X86EMUL_EXCEPTION,
it does not imply that an exception has been latched for injection into
the guest. It only tells the emulator to stop, because a vm_event has
already been sent to the monitor.

Alex
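P.S. For illustration only, a sketch of the intended caller-side
distinction (the X86EMUL_EXCEPTION case mirrors the existing
hvm_emulate_one_vm_event() handling; the X86EMUL_ACCESS_EXCEPTION case
is an assumption, not part of this patch):

    switch ( rc )
    {
    case X86EMUL_EXCEPTION:
        /* An event was latched during emulation - inject it into the guest. */
        hvm_inject_event(&ctx.ctxt.event);
        break;

    case X86EMUL_ACCESS_EXCEPTION:
        /*
         * A vm_event has already been sent to the monitor; emulation just
         * stops here, with no exception injected and no guest state changed.
         */
        break;
    }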