Re: [Xen-devel] [PATCH] x86emul/fuzz: add rudimentary limit checking
On 05/07/17 10:57, Jan Beulich wrote:
> fuzz_insn_fetch() is the only data access helper where it is possible
> to see offsets larger than 4Gb in 16- or 32-bit modes, as we leave the
> incoming rIP untouched in the emulator itself. The check is needed here
> as otherwise, after successfully fetching insn bytes, we may end up
> zero-extending EIP soon after complete_insn, which collides with the
> soon X86EMUL_EXCEPTION-conditional respective ASSERT() in
> x86_emulate_wrapper(). (NB: put_rep_prefix() is what allows
> complete_insn to be reached with rc set to other than X86EMUL_OKAY or
> X86EMUL_DONE. See also commit 53f87c03b4 ["x86emul: generalize
> exception handling for rep_* hooks"].)
>
> Add assert()-s for all other (data) access routines, as effective
> address generation in the emulator ought to guarantee in-range values.
>
> Reported-by: George Dunlap <george.dunlap@xxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>
> --- a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
> +++ b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
> @@ -139,7 +139,10 @@ static int fuzz_read(
>      struct x86_emulate_ctxt *ctxt)
>  {
>      /* Reads expected for all user and system segments. */
> -    assert(is_x86_user_segment(seg) || is_x86_system_segment(seg));
> +    if ( is_x86_user_segment(seg) )
> +        assert(ctxt->addr_size == 64 || !(offset >> 32));
> +    else
> +        assert(is_x86_system_segment(seg) && !(offset >> 48));

Why 48?  For GDTR/IDTR, the limit is explicitly 16 bits.

For LDTR, the limit is 32 bits, but only 16 bits worth of offset can be
architecturally reached.

For TR, the limit is also 32 bits, but only 17 bits can be reached
(given some specific IO_BITMAP_OFFSET choices).

Everything else looks fine, though.

~Andrew
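As a rough illustration of the limit arithmetic in the reply above, here is
a minimal standalone sketch (not part of the patch or the thread; the
constants assume the usual architectural field widths for the descriptor
table registers and the TSS I/O permission bitmap):

/*
 * Illustrative only: upper bounds on the offsets that can architecturally
 * be generated through each system segment.
 */
#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* GDTR/IDTR: the limit field itself is only 16 bits wide. */
    uint64_t gdtr_idtr_max = 0xffffULL;

    /*
     * LDTR: the limit is 32 bits, but a selector index is 13 bits, so the
     * highest byte reachable through the LDT is 8191 * 8 + 7 = 0xffff.
     */
    uint64_t ldtr_max = 8191ULL * 8 + 7;

    /*
     * TR: a 16-bit I/O bitmap offset plus the 8Kb bitmap covering 65536
     * ports can push accesses just past bit 15, i.e. into bit 16 - hence
     * "17 bits" above.
     */
    uint64_t tr_max = 0xffffULL + 65536 / 8;

    assert(!(gdtr_idtr_max >> 16));    /* fits in 16 bits */
    assert(!(ldtr_max >> 16));         /* fits in 16 bits */
    assert(!(tr_max >> 17));           /* needs 17 bits, no more */
    return 0;
}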