Re: [Xen-devel] [PATCH v4 3/6] x86emul: conditionally clear BNDn for branches
>>> On 11.01.17 at 16:40, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 10/01/17 09:04, Jan Beulich wrote:
>> @@ -1836,6 +1840,34 @@ static int inject_swint(enum x86_swint_t
>>          generate_exception(fault_type, error_code);
>>  }
>>
>> +static void adjust_bnd(struct x86_emulate_ctxt *ctxt,
>> +                       const struct x86_emulate_ops *ops, enum vex_pfx pfx)
>> +{
>> +    uint64_t bndcfg;
>> +    int rc;
>> +
>> +    if ( pfx == vex_f2 || !vcpu_has_mpx() )
>> +        return;
>
> I'm sorry, but I am still going to argue over this.  This needs to be a
> host_and_vcpu check, because we are actually using real host
> state/operations to perform the emulation.
>
> At the moment, given junk in the vcpu cpuid information, we could still
> hit some assertions.  Furthermore, this is the litmus test I use to
> choose between the two forms, i.e. "If the vcpu information has junk in
> it, will Xen crash when it comes to execute this path?".
>
> At the point we think the vcpu information is strictly a subset of
> hardware, we could turn the host_and_* side of the check into an
> ASSERT(cpu_has_*) instead, but the difference between the two is still
> semantically relevant.

Okay, I think I finally understand your concern. Originally my intention
was for read_bndcfgu() and xstate_set_init() to be effectively no-ops
with !cpu_has_mpx. I see now that I have broken this at some point for
the former, and the assertion you had asked me to add also broke it for
the latter.

So I see both options as viable: do as you say and add a cpu_has_mpx
check (which is now much easier than back when I wrote this patch, as
the test harness's cpu_has_* are now visible in the main emulator source
file afaict), or convert the two functions back to how they were
intended to behave originally.

Jan
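
To make the two forms concrete, here is a minimal sketch of the combined
check Andrew is asking for, assuming the spellings cpu_has_mpx for the
host capability and vcpu_has_mpx() for the guest CPUID policy; the exact
form that lands in the tree may differ:

    static void adjust_bnd(struct x86_emulate_ctxt *ctxt,
                           const struct x86_emulate_ops *ops, enum vex_pfx pfx)
    {
        /*
         * F2- (BND-) prefixed branches leave BNDn alone.  Beyond the
         * guest's CPUID policy (vcpu_has_mpx()), also check the host
         * capability (cpu_has_mpx): the code below acts on real host
         * XSAVE state, so junk in the vcpu CPUID data alone must not be
         * able to take us down this path on MPX-less hardware.
         */
        if ( pfx == vex_f2 || !cpu_has_mpx || !vcpu_has_mpx() )
            return;

        /* ... read BNDCFGU/BNDCFGS and clear BNDn as in the patch ... */
    }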
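
The alternative Jan describes, restoring the intended no-op behaviour,
would instead push the host check down into the helpers themselves. A
hypothetical sketch (the real xstate.c implementations differ in their
details):

    uint64_t read_bndcfgu(void)
    {
        /* Without host MPX there is no BNDCSR state to read. */
        if ( !cpu_has_mpx )
            return 0;

        /* ... save BNDCSR via XSAVE and return the BNDCFGU value ... */
    }

    void xstate_set_init(uint64_t mask)
    {
        /* Ignore components the host lacks, rather than asserting. */
        mask &= xfeature_mask;
        if ( !mask )
            return;

        /* ... force the remaining components to their init state ... */
    }

Either way the emulation path becomes safe against junk vcpu CPUID data;
the difference is whether the guard sits at the call site or inside the
helpers.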