Re: [Xen-devel] [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
On Wed, 2018-09-12 at 11:23 +0100, Andrew Cooper wrote:
> On 12/09/18 10:46, Sergey Dyasli wrote:
> > On Wed, 2018-09-12 at 10:12 +0100, Andrew Cooper wrote:
> > > On 12/09/18 09:29, Sergey Dyasli wrote:
> > > > On Tue, 2018-09-11 at 19:56 +0100, Andrew Cooper wrote:
> > > > > Rename them to guest_{rd,wr}msr_xen() for consistency, and because
> > > > > the _regs suffix isn't very appropriate.
> > > > >
> > > > > Update them to take a vcpu pointer rather than presuming that they
> > > > > act on current, and switch to using X86EMUL_* return values.
> > > > >
> > > > > Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > > > > ---
> > > > > CC: Jan Beulich <JBeulich@xxxxxxxx>
> > > > > CC: Wei Liu <wei.liu2@xxxxxxxxxx>
> > > > > CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > > > > CC: Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
> > > > >
> > > > > v3:
> > > > >  * Clean up after splitting the series.
> > > > > ---
> > > > >  xen/arch/x86/msr.c              |  6 ++----
> > > > >  xen/arch/x86/traps.c            | 29 +++++++++++++----------------
> > > > >  xen/include/asm-x86/processor.h |  4 ++--
> > > > >  3 files changed, 17 insertions(+), 22 deletions(-)
> > > > >
> > > > > diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> > > > > index cf0dc27..8f02a89 100644
> > > > > --- a/xen/arch/x86/msr.c
> > > > > +++ b/xen/arch/x86/msr.c
> > > > > @@ -156,8 +156,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
> > > > >
> > > > >          /* Fallthrough. */
> > > > >      case 0x40000200 ... 0x400002ff:
> > > > > -        ret = (rdmsr_hypervisor_regs(msr, val)
> > > > > -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
> > > > > +        ret = guest_rdmsr_xen(v, msr, val);
> > > > >          break;
> > > > >
> > > > >      default:
> > > > > @@ -277,8 +276,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
> > > > >
> > > > >          /* Fallthrough. */
> > > > >      case 0x40000200 ... 0x400002ff:
> > > > > -        ret = (wrmsr_hypervisor_regs(msr, val) == 1
> > > > > -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
> > > > > +        ret = guest_wrmsr_xen(v, msr, val);
> > > > >          break;
> > > > >
> > > > >      default:
> > > > > diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> > > > > index 7c17806..3988753 100644
> > > > > --- a/xen/arch/x86/traps.c
> > > > > +++ b/xen/arch/x86/traps.c
> > > > > @@ -768,29 +768,25 @@ static void do_trap(struct cpu_user_regs *regs)
> > > > >            trapnr, trapstr(trapnr), regs->error_code);
> > > > >  }
> > > > >
> > > > > -/* Returns 0 if not handled, and non-0 for success. */
> > > > > -int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val)
> > > > > +int guest_rdmsr_xen(const struct vcpu *v, uint32_t idx, uint64_t *val)
> > > > >  {
> > > > > -    struct domain *d = current->domain;
> > > > > +    const struct domain *d = v->domain;
> > > > >      /* Optionally shift out of the way of Viridian architectural MSRs. */
> > > > >      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
> > > > >
> > > > >      switch ( idx - base )
> > > > >      {
> > > > >      case 0: /* Write hypercall page MSR.  Read as zero. */
> > > > > -    {
> > > > >          *val = 0;
> > > > > -        return 1;
> > > > > -    }
> > > > > +        return X86EMUL_OKAY;
> > > > >      }
> > > > >
> > > > > -    return 0;
> > > > > +    return X86EMUL_EXCEPTION;
> > > > >  }
> > > > >
> > > > > -/* Returns 1 if handled, 0 if not and -Exx for error. */
> > > > > -int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> > > > > +int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
> > > > >  {
> > > > > -    struct domain *d = current->domain;
> > > > > +    struct domain *d = v->domain;
> > > > >      /* Optionally shift out of the way of Viridian architectural MSRs. */
> > > > >      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
> > > > >
> > > > > @@ -809,7 +805,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> > > > >              gdprintk(XENLOG_WARNING,
> > > > >                       "wrmsr hypercall page index %#x unsupported\n",
> > > > >                       page_index);
> > > > > -            return 0;
> > > > > +            return X86EMUL_EXCEPTION;
> > > > >          }
> > > > >
> > > > >          page = get_page_from_gfn(d, gmfn, &t, P2M_ALLOC);
> > > > > @@ -822,13 +818,13 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> > > > >          if ( p2m_is_paging(t) )
> > > > >          {
> > > > >              p2m_mem_paging_populate(d, gmfn);
> > > > > -            return -ERESTART;
> > > > > +            return X86EMUL_RETRY;
> > > >
> > > > Previously -ERESTART would've been converted to X86EMUL_EXCEPTION. But
> > > > with this patch, X86EMUL_RETRY will actually be returned. I don't think
> > > > that callers can handle this situation.
> > > >
> > > > E.g. the code from vmx_vmexit_handler():
> > > >
> > > >     case EXIT_REASON_MSR_WRITE:
> > > >         switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
> > > >         {
> > > >         case X86EMUL_OKAY:
> > > >             update_guest_eip(); /* Safe: WRMSR */
> > > >             break;
> > > >
> > > >         case X86EMUL_EXCEPTION:
> > > >             hvm_inject_hw_exception(TRAP_gp_fault, 0);
> > > >             break;
> > > >         }
> > > >         break;
> > >
> > > Hmm lovely, so it was broken before, but should be correct now.
> > >
> > > RETRY has caused an entry to go onto the paging ring, which will pause
> > > the vcpu until a reply occurs, after which we will re-enter the guest
> > > without having moved RIP forwards, re-execute the wrmsr instruction, and
> > > this time succeed because the frame has been paged in.
> >
> > Actually, the current VMX/SVM (but not PV) code does:
> >
> >     switch ( wrmsr_hypervisor_regs(msr, msr_content) )
> >     {
> >     case -ERESTART:
> >         return X86EMUL_RETRY;
> >
> > This code is removed in the 1/3 patch, but I wasn't CCed.
>
> Ah right, in which case I need to temporarily transplant this switch
> into patch 1.  Given it's only the PV side which is then broken, I can
> probably see about doing a bugfix for that.

With this being rebased on top of v4 1/3:

Reviewed-by: Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>

--
Thanks,
Sergey
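For readers following the X86EMUL_RETRY discussion above, here is a minimal sketch of how the vmx_vmexit_handler() fragment Sergey quotes could tolerate a RETRY return, given the re-execution semantics Andrew describes. This is not code from the series; the X86EMUL_RETRY case and its comment are illustrative assumptions only:

    case EXIT_REASON_MSR_WRITE:
        switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
        {
        case X86EMUL_OKAY:
            update_guest_eip(); /* Safe: WRMSR */
            break;

        case X86EMUL_RETRY:
            /*
             * Deliberately do nothing: the vcpu has been paused on the
             * paging ring, and leaving RIP unmodified means the guest
             * re-executes the WRMSR once the frame has been paged in.
             */
            break;

        case X86EMUL_EXCEPTION:
            hvm_inject_hw_exception(TRAP_gp_fault, 0);
            break;
        }
        break;

The essential point is that a RETRY outcome must neither advance RIP nor inject #GP: the paging-ring reply unpauses the vcpu, and the instruction then restarts cleanly.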