[Xen-devel] [PATCH 2/6] x86/shadow: Fixes to hvm_emulate_insn_fetch()
Zero-length reads are jump-target segmentation checks; never serve them from
the cache.
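
(Illustration only, not part of the patch; INSN_BUF_BYTES below stands in for
sh_ctxt->insn_buf_bytes.  With the old check, a zero-length fetch at a cached
offset passed the bounds test and so never reached hvm_read(), which is where
the segmentation check actually happens:)

#include <stdbool.h>
#include <stdio.h>

#define INSN_BUF_BYTES 16u   /* stands in for sh_ctxt->insn_buf_bytes */

int main(void)
{
    unsigned int insn_off = 0, bytes = 0;   /* zero-length jump-target check */

    /* Old check: 0 + 0 > 16 is false, so the zero-length read is "served
     * from the cache" and the segment limit is never validated. */
    printf("old falls back: %d\n", (insn_off + bytes) > INSN_BUF_BYTES);

    /* New check: !bytes forces the hvm_read() path unconditionally. */
    printf("new falls back: %d\n",
           !bytes || (insn_off + bytes) > INSN_BUF_BYTES);
    return 0;
}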
Force insn_off to a single byte, as offset can wrap around or truncate with
respect to sh_ctxt->insn_buf_eip under a number of normal circumstances.
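
(Again illustration only; the helper names below are made up for the example.
With a 32-bit insn_off, an offset just below insn_buf_eip makes insn_off +
bytes wrap past zero and slip under the bounds check; truncating insn_off to
uint8_t makes that overflow impossible:)

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define INSN_BUF_BYTES 16u   /* stands in for sh_ctxt->insn_buf_bytes */

/* The old check: insn_off + bytes can overflow unsigned int. */
static bool old_falls_back(unsigned long offset, unsigned long eip,
                           unsigned int bytes)
{
    unsigned int insn_off = offset - eip;
    return (insn_off + bytes) > INSN_BUF_BYTES;
}

/* The new check: insn_off is at most 255, so the sum cannot overflow. */
static bool new_falls_back(unsigned long offset, unsigned long eip,
                           unsigned int bytes)
{
    uint8_t insn_off = offset - eip;
    return !bytes || (insn_off + bytes) > INSN_BUF_BYTES;
}

int main(void)
{
    unsigned long eip = 0x1000;

    /* Offset one byte below the cache start: insn_off == 0xffffffff,
     * 0xffffffff + 2 wraps to 1, 1 > 16 is false -> bogus cache hit at
     * index 0xffffffff, i.e. an out-of-bounds read. */
    printf("old: falls back = %d\n", old_falls_back(eip - 1, eip, 2));

    /* Truncated to uint8_t, insn_off == 0xff; 0xff + 2 > 16 -> fall back. */
    printf("new: falls back = %d\n", new_falls_back(eip - 1, eip, 2));
    return 0;
}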
Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Tim Deegan <tim@xxxxxxx>
CC: Jan Beulich <JBeulich@xxxxxxxx>
---
xen/arch/x86/mm/shadow/common.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 2e64a77..deea03a 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -235,12 +235,16 @@ hvm_emulate_insn_fetch(enum x86_segment seg,
 {
     struct sh_emulate_ctxt *sh_ctxt =
         container_of(ctxt, struct sh_emulate_ctxt, ctxt);
-    unsigned int insn_off = offset - sh_ctxt->insn_buf_eip;
+    /* Careful, as offset can wrap or truncate WRT insn_buf_eip. */
+    uint8_t insn_off = offset - sh_ctxt->insn_buf_eip;
 
     ASSERT(seg == x86_seg_cs);
 
-    /* Fall back if requested bytes are not in the prefetch cache. */
-    if ( unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
+    /*
+     * Fall back if requested bytes are not in the prefetch cache, but always
+     * perform the zero-length read for segmentation purposes.
+     */
+    if ( !bytes || unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
         return hvm_read(seg, offset, p_data, bytes,
                         hvm_access_insn_fetch, sh_ctxt);
 
--
2.1.4