Re: [Xen-devel] [PATCH v3 1/9] x86: infrastructure to allow converting certain indirect calls to direct ones
>>> On 21.09.18 at 15:48, <wei.liu2@xxxxxxxxxx> wrote:
> On Fri, Sep 21, 2018 at 05:47:54AM -0600, Jan Beulich wrote:
>> >>> On 21.09.18 at 12:49, <wei.liu2@xxxxxxxxxx> wrote:
>> > On Tue, Sep 11, 2018 at 07:32:04AM -0600, Jan Beulich wrote:
>> >> @@ -218,6 +219,13 @@ void init_or_livepatch apply_alternative
>> >
>> > I think you need to fix the comment before this if statement. At the
>> > very least you're now using two ->priv values to make patching decisions.
>>
>> I've been considering this, but even a very close look didn't turn up
>> anything I could do to this comment to improve it. Suggestions welcome.
>
> Just remove the sentence about using a single ->priv field?

That would go too far. But I'll make it "for some of our patching
decisions".

>> >> @@ -236,20 +244,74 @@ void init_or_livepatch apply_alternative
>> >>              continue;
>> >>          }
>> >>
>> >> -        base->priv = 1;
>> >> -
>> >>          memcpy(buf, repl, a->repl_len);
>> >>
>> >>          /* 0xe8/0xe9 are relative branches; fix the offset. */
>> >>          if ( a->repl_len >= 5 && (*buf & 0xfe) == 0xe8 )
>> >> -            *(int32_t *)(buf + 1) += repl - orig;
>> >> +        {
>> >> +            /*
>> >> +             * Detect the special case of indirect-to-direct branch patching:
>> >> +             * - replacement is a direct CALL/JMP (opcodes 0xE8/0xE9; already
>> >> +             *   checked above),
>> >> +             * - replacement's displacement is -5 (pointing back at the very
>> >> +             *   insn, which makes no sense in a real replacement insn),
>> >> +             * - original is an indirect CALL/JMP (opcodes 0xFF/2 or 0xFF/4)
>> >> +             *   using RIP-relative addressing.
>> >> +             * Some function targets may not be available when we come here
>> >> +             * the first time. Defer patching of those until the post-presmp-
>> >> +             * initcalls re-invocation. If at that point the target pointer is
>> >> +             * still NULL, insert "UD2; UD0" (for ease of recognition) instead
>> >> +             * of CALL/JMP.
>> >> +             */
>> >> +            if ( a->cpuid == X86_FEATURE_ALWAYS &&
>> >> +                 *(int32_t *)(buf + 1) == -5 &&
>> >> +                 a->orig_len >= 6 &&
>> >> +                 orig[0] == 0xff &&
>> >> +                 orig[1] == (*buf & 1 ? 0x25 : 0x15) )
>> >> +            {
>> >> +                long disp = *(int32_t *)(orig + 2);
>> >> +                const uint8_t *dest = *(void **)(orig + 6 + disp);
>> >> +
>> >> +                if ( dest )
>> >> +                {
>> >> +                    disp = dest - (orig + 5);
>> >> +                    ASSERT(disp == (int32_t)disp);
>> >> +                    *(int32_t *)(buf + 1) = disp;
>> >> +                }
>> >> +                else if ( force )
>> >> +                {
>> >> +                    buf[0] = 0x0f;
>> >> +                    buf[1] = 0x0b;
>> >> +                    buf[2] = 0x0f;
>> >> +                    buf[3] = 0xff;
>> >> +                    buf[4] = 0xff;
>> >
>> > I think these are opcodes for "UD2; UD0". Please add a comment for them.
>> > Having to go through the SDM to figure out what they are isn't nice.
>>
>> Well, I'm saying so in the relatively big comment ahead of this block of
>> code. I don't want to say the same thing twice.
>
> It is all fine when one is rather familiar with the code and x86-isms,
> but it is rather difficult for a casual reader when you refer to
> "target" in the comment but "dest" in the code.

Would "function pointers" / "branch destinations" (or both) in the
comment be better?

> The lack of a comment on what "force" means doesn't help either.
>
>> > At this point I also think the name "force" is not very good. What/who
>> > is forced here? Why not use a more descriptive name like "post_init" or
>> > "system_active"?
>>
>> _Patching_ is being forced here, i.e. even if we still can't find a non-NULL
>> pointer, we still patch the site. I'm certainly open to suggestions, but
>> I don't really like either of the two suggestions you make any better than
>> the current "force". The next best option I had been thinking about back
>> then was to pass in a number, to identify the stage / phase / pass we're in.
>
> I had to reverse-engineer when "force" is supposed to be true. It would
> help a lot if you added a comment regarding "force" at the beginning of
> the function.

Will do.

Jan
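[Editor's note: for readers who want to see the byte-level arithmetic from the
hunk above in isolation, here is a minimal standalone sketch in C. It is not
Xen code: the names "slot", "target_fn", "orig", "buf" and the local "force"
flag are invented for illustration, the "patching" happens in scratch buffers
rather than the live instruction stream, and none of the real self-modifying
code concerns are addressed. It only mirrors the decode of an
"ff 15 <disp32>" insn (CALL through a RIP-relative function pointer), the
rel32 computation for the direct "e8" CALL, and the UD2/UD0 fallback.]

    /* Minimal sketch (not Xen code): indirect-to-direct CALL patching. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void target_fn(void) { }

    /* The function-pointer slot the indirect CALL reads through. */
    static void (*slot)(void) = target_fn;

    /* Static buffers so all addresses live in one mapping and the 32-bit
     * displacements below actually fit. */
    static uint8_t orig[6]; /* original: ff 15 <disp32>, CALL [RIP+disp32] */
    static uint8_t buf[5];  /* replacement: e8 <rel32>, or "UD2; UD0" */

    int main(void)
    {
        int32_t disp;
        const uint8_t *dest;
        int force = 1;  /* final pass: patch even if the target is NULL */

        /* Encode "call *slot(%rip)"; disp32 is relative to the insn end. */
        orig[0] = 0xff;
        orig[1] = 0x15; /* ModRM for opcode 0xFF /2, RIP-relative */
        disp = (int32_t)((uint8_t *)&slot - (orig + 6));
        memcpy(orig + 2, &disp, 4);

        /* Decode: follow the RIP-relative displacement to the pointer
         * slot, then load the current function pointer from it. */
        memcpy(&disp, orig + 2, 4);
        dest = *(void **)(orig + 6 + disp);

        if ( dest )
        {
            /* rel32 of a direct CALL is relative to the end of the
             * 5-byte instruction that would overwrite the site. */
            long d = dest - (orig + 5);

            assert(d == (int32_t)d); /* must be reachable with rel32 */
            buf[0] = 0xe8;           /* direct CALL */
            disp = (int32_t)d;
            memcpy(buf + 1, &disp, 4);
            printf("direct CALL, rel32 = %d\n", disp);
        }
        else if ( force )
        {
            /* UD2 (0f 0b) plus UD0 with a ModRM byte (0f ff ff): five
             * bytes total, i.e. exactly the size of the CALL they
             * replace, and easy to recognise in a crash dump. */
            static const uint8_t ud[5] = { 0x0f, 0x0b, 0x0f, 0xff, 0xff };

            memcpy(buf, ud, sizeof(buf));
            printf("target NULL, planted UD2; UD0\n");
        }
        return 0;
    }

The "force" flag here plays the role the thread discusses: on the first
invocation a still-NULL slot simply leaves the site unpatched for the
post-presmp-initcalls re-invocation, while on that forced second pass the
UD2/UD0 pattern makes any site whose target never materialised fault in an
immediately recognisable way.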