
[Xen-changelog] [xen stable-4.9] x86/HVM: correct repeat count update in linear->phys translation



commit b244ac995c7c1ef79fc0e7cfbad146ec8f0445f3
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri Oct 6 14:57:55 2017 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Oct 6 14:57:55 2017 +0200

    x86/HVM: correct repeat count update in linear->phys translation
    
    For the insn emulator's fallback logic in REP INS/OUTS handling
    to work correctly, *reps must not be set to zero when returning
    X86EMUL_UNHANDLEABLE.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    master commit: 49160d205236d8e36d27d40b6bf69b9b75f2c333
    master date: 2017-09-08 16:23:46 +0200
---
 xen/arch/x86/hvm/emulate.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index ca13722..af09dcc 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -566,15 +566,16 @@ static int hvmemul_linear_to_phys(
             if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
                 return X86EMUL_RETRY;
             done /= bytes_per_rep;
-            *reps = done;
             if ( done == 0 )
             {
                 ASSERT(!reverse);
                 if ( npfn != gfn_x(INVALID_GFN) )
                     return X86EMUL_UNHANDLEABLE;
+                *reps = 0;
                 x86_emul_pagefault(pfec, addr & PAGE_MASK, &hvmemul_ctxt->ctxt);
                 return X86EMUL_EXCEPTION;
             }
+            *reps = done;
             break;
         }
 
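For readers who want to see the contract in isolation: below is a minimal,
standalone sketch of the repeat-count rule this patch restores. All names in
it (toy_linear_to_phys and so on) are invented for illustration and do not
exist in the Xen tree; the real logic lives in hvmemul_linear_to_phys() in
xen/arch/x86/hvm/emulate.c. The control flow mirrors the hunk above: on the
X86EMUL_UNHANDLEABLE path *reps is left untouched, so the insn emulator's
REP INS/OUTS fallback still sees the requested iteration count, and *reps is
zeroed only on the path that raises a #PF.

/*
 * Illustrative sketch only; these names are invented for this example.
 * The real logic is hvmemul_linear_to_phys() in xen/arch/x86/hvm/emulate.c.
 */
#include <stdio.h>

enum toy_rc {
    TOY_OKAY,          /* translated; *reps holds the count covered       */
    TOY_UNHANDLEABLE,  /* caller falls back to one-rep-at-a-time I/O;
                        * *reps must be left as the caller passed it in   */
    TOY_EXCEPTION,     /* #PF raised; zero repeats were processed         */
};

static enum toy_rc toy_linear_to_phys(unsigned long done_bytes,
                                      unsigned int bytes_per_rep,
                                      int next_pfn_valid, /* stands in for
                                              npfn != gfn_x(INVALID_GFN) */
                                      unsigned long *reps)
{
    unsigned long done = done_bytes / bytes_per_rep;

    if ( done == 0 )
    {
        if ( next_pfn_valid )
            return TOY_UNHANDLEABLE;   /* *reps left untouched (the fix)  */
        *reps = 0;                     /* nothing completed before the #PF */
        return TOY_EXCEPTION;
    }
    *reps = done;                      /* report how many reps fit        */
    return TOY_OKAY;
}

int main(void)
{
    unsigned long reps = 8;

    /* Not even one rep translated, but the next page mapped fine:
     * pre-patch this clobbered reps to 0; post-patch it stays at 8. */
    enum toy_rc rc = toy_linear_to_phys(0, 4, 1, &reps);
    printf("rc=%d reps=%lu\n", rc, reps);   /* prints: rc=1 reps=8 */
    return 0;
}

The sketch compiles with any C compiler; the point is simply that the
UNHANDLEABLE path no longer writes through the reps pointer.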
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.9

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog
