Re: [Xen-devel] [PATCH v4 08/10] nEPT: handle invept instruction from L1 VMM
> >
> > +int nvmx_handle_invept(struct cpu_user_regs *regs)
> > +{
> > + struct vmx_inst_decoded decode;
> > + unsigned long eptp;
> > + u64 inv_type;
> > +
> > + if ( decode_vmx_inst(regs, &decode, &eptp, 0) != X86EMUL_OKAY )
> > + return X86EMUL_EXCEPTION;
>
> So in the overview you said you fixed this, but here it is again:
> There are more than the two X86EMUL_* values referenced above, and
> hence you can't imply that if it's not one, it's the other.
Do you mean X86EMUL_EXCEPTION can't be returned here? I think
decode_vmx_inst() already handles the exception, so the caller doesn't need to
do anything further. Once the caller of nvmx_handle_invept() gets this return
value, it does not advance RIP (no RIP++) and instead injects an exception on
its return path.
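For reference, the caller side is expected to follow the usual pattern for the
other nvmx_handle_* results, roughly like this (untested sketch; the exit-reason
label and update_guest_eip() call below are just how I assume the result is
consumed, and may not match the final code exactly):

    case EXIT_REASON_INVEPT:
        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
            /* Instruction emulated successfully: advance guest RIP. */
            update_guest_eip();
        /* On X86EMUL_EXCEPTION, decode_vmx_inst() has already queued the
         * fault, so RIP stays on the invept and the exception is injected
         * on the exit path instead. */
        break;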
> > +
> > + inv_type = reg_read(regs, decode.reg2);
> > +
> > + switch ( inv_type )
>
> There doesn't appear to be a second use of inv_type, and hence you can
>
> switch ( reg_read(regs, decode.reg2) )
>
> and remove the local variable.
Okay.
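So the switch would then read roughly as below (untested, simply the quoted code
with the local variable dropped):

    switch ( reg_read(regs, decode.reg2) )
    {
    case INVEPT_SINGLE_CONTEXT:
    {
        struct p2m_domain *p2m = vcpu_nestedhvm(current).nv_p2m;

        if ( p2m )
        {
            p2m_flush(current, p2m);
            ept_sync_domain(p2m);
        }
        break;
    }
    case INVEPT_ALL_CONTEXT:
        p2m_flush_nestedp2m(current->domain);
        __invept(INVEPT_ALL_CONTEXT, 0, 0);
        break;
    default:
        vmreturn(regs, VMFAIL_INVALID);
        return X86EMUL_OKAY;
    }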
> > + {
> > + case INVEPT_SINGLE_CONTEXT:
> > + {
> > + struct p2m_domain *p2m = vcpu_nestedhvm(current).nv_p2m;
> > + if ( p2m )
> > + {
> > + p2m_flush(current, p2m);
>
> And similarly you said you fixed all the white space issues.
Very strange, and I will fix it. Thanks!
Xiantao
> Jan
>
> > + ept_sync_domain(p2m);
> > + }
> > + break;
> > + }
> > + case INVEPT_ALL_CONTEXT:
> > + p2m_flush_nestedp2m(current->domain);
> > + __invept(INVEPT_ALL_CONTEXT, 0, 0);
> > + break;
> > + default:
> > + vmreturn(regs, VMFAIL_INVALID);
> > + return X86EMUL_OKAY;
> > + }
> > + vmreturn(regs, VMSUCCEED);
> > + return X86EMUL_OKAY;
> > +}
> > +
> > +
> > #define __emul_value(enable1, default1) \
> > ((enable1 | default1) << 32 | (default1))
> >
>