Re: [Xen-devel] [PATCH v2 08/16] SVM: use generic instruction decoding
>>> On 29.09.16 at 21:24, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 28/09/16 09:13, Jan Beulich wrote:
>> ... instead of custom handling. To facilitate this break out init code
>> from _hvm_emulate_one() into the new hvm_emulate_init(), and make
>> hvmemul_insn_fetch( globally available.
>
> )
>
>> int __get_instruction_length_from_list(struct vcpu *v,
>>     const enum instruction_index *list, unsigned int list_count)
>> {
>>     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
>> -    unsigned int i, j, inst_len = 0;
>> -    enum instruction_index instr = 0;
>> -    u8 buf[MAX_INST_LEN];
>> -    const u8 *opcode = NULL;
>> -    unsigned long fetch_addr, fetch_limit;
>> -    unsigned int fetch_len, max_len;
>> +    struct hvm_emulate_ctxt ctxt;
>> +    struct x86_emulate_state *state;
>> +    unsigned int inst_len, j, modrm_rm, modrm_reg;
>> +    int modrm_mod;
>>
>
> Despite knowing how this works, it is still confusing to read. Do you
> mind putting in a comment such as:
>
> /* In a debug build, always use x86_decode_insn() and compare with
> hardware. */

Sure.

>>     for ( j = 0; j < list_count; j++ )
>>     {
>> -        instr = list[j];
>> -        opcode = opc_bytes[instr];
>> +        enum instruction_index instr = list[j];
>>
>> -        for ( i = 0; (i < opcode[0]) && ((inst_len + i) < max_len); i++ )
>> +        ASSERT(instr >= 0 && instr < ARRAY_SIZE(opc_tab));
>
> This is another ASSERT() used as a bounds check, and will suffer a build
> failure on clang.
>
> You need to use s/enum instruction_index/unsigned int/ to fix the build
> issue.

Oh, right. This predates us having become aware of that clang issue.

> Can I also request the use of
>
>     if ( instr >= ARRAY_SIZE(opc_tab) )
>     {
>         ASSERT_UNREACHABLE();
>         return 0;
>     }
>
> instead?

Except that I prefer "break" over "return 0" here.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
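
A minimal, self-contained C sketch of the bounds-check pattern requested above. The opc_tab[] contents, the function name and the ASSERT_UNREACHABLE() definition here are illustrative stand-ins, not the Xen definitions; the real code keeps the check inside the instruction-matching loop, where Jan prefers "break" over "return 0". Taking the index as a plain unsigned int (rather than the enum) is what avoids the clang build failure: with an unsigned underlying type, "instr >= 0" is tautologically true and the resulting warning is fatal under -Werror.

#include <assert.h>
#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Stand-in for Xen's ASSERT_UNREACHABLE(): loud in debug builds,
 * a no-op in release builds. */
#ifdef NDEBUG
#define ASSERT_UNREACHABLE() ((void)0)
#else
#define ASSERT_UNREACHABLE() assert(!"unreachable")
#endif

/* Made-up opcode table standing in for opc_tab[] in svm.c. */
static const unsigned char opc_tab[][2] = {
    { 0x0f, 0x01 },
    { 0xcd, 0x03 },
};

/* Plain unsigned int index: a single unsigned range check, no
 * tautological ">= 0" comparison for clang to warn about. */
static unsigned int first_opcode_byte(unsigned int instr)
{
    if ( instr >= ARRAY_SIZE(opc_tab) )
    {
        ASSERT_UNREACHABLE();
        return 0;   /* in the real loop this would be "break" */
    }

    return opc_tab[instr][0];
}

int main(void)
{
    printf("%#x\n", first_opcode_byte(0)); /* in range */
    printf("%#x\n", first_opcode_byte(9)); /* out of range: 0 in release, asserts in debug */
    return 0;
}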