
[Xen-changelog] [xen stable-4.10] x86/HVM: don't crash guest in hvmemul_find_mmio_cache()



commit 3f5490d7e442db3dc65d784ee3c087f7e41f5a06
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri May 3 11:04:32 2019 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri May 3 11:04:32 2019 +0200

    x86/HVM: don't crash guest in hvmemul_find_mmio_cache()
    
    Commit 35a61c05ea ("x86emul: adjust handling of AVX2 gathers") builds
    upon the fact that the domain will actually survive running out of
    MMIO result buffer space, so drop the domain_crash() invocation. Also
    delay incrementing the usage counter, so that the function can't
    possibly use or return an out-of-bounds slot/pointer in case execution
    subsequently makes it back into the function without a prior reset of
    state.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    master commit: a43c1dec246bdee484e6a3de001cc6850a107abe
    master date: 2019-03-12 14:39:46 +0100
---
 xen/arch/x86/hvm/emulate.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 3bf4cfe9f0..fae93bc748 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -945,12 +945,11 @@ static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
             return cache;
     }
 
-    i = vio->mmio_cache_count++;
+    i = vio->mmio_cache_count;
     if ( i == ARRAY_SIZE(vio->mmio_cache) )
-    {
-        domain_crash(current->domain);
         return NULL;
-    }
+
+    ++vio->mmio_cache_count;
 
     cache = &vio->mmio_cache[i];
     memset(cache, 0, sizeof (*cache));
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.10
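
The interesting part of the change is the reordering of the bounds check
and the counter increment. Below is a minimal, standalone C sketch of that
pattern, using made-up names (vio_demo, mmio_cache_demo, find_slot,
CACHE_SLOTS) rather than the real Xen types, to illustrate why a full
cache can now simply be reported to the caller without the counter ever
pointing past the end of the array.

/*
 * Illustrative sketch only -- not the actual Xen code. The usage counter
 * is advanced only once the bounds check has passed, so a full cache
 * yields a NULL return and leaves the counter untouched for any later
 * call into the function.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 4

struct mmio_cache_demo {
    uint64_t gla;            /* guest linear address this entry caches */
};

struct vio_demo {
    unsigned int count;      /* number of slots handed out so far */
    struct mmio_cache_demo slots[CACHE_SLOTS];
};

static struct mmio_cache_demo *find_slot(struct vio_demo *vio, uint64_t gla)
{
    unsigned int i;

    /* Reuse an existing entry for this address if one is present. */
    for ( i = 0; i < vio->count; i++ )
        if ( vio->slots[i].gla == gla )
            return &vio->slots[i];

    /* Bounds check first: running out of slots is reported, not fatal... */
    i = vio->count;
    if ( i == CACHE_SLOTS )
        return NULL;

    /* ...and the counter is bumped only after a free slot is guaranteed. */
    ++vio->count;

    memset(&vio->slots[i], 0, sizeof(vio->slots[i]));
    vio->slots[i].gla = gla;

    return &vio->slots[i];
}

int main(void)
{
    struct vio_demo vio = { 0 };
    uint64_t gla;

    /* Fill the cache and then overflow it: lookups five and six get NULL,
     * while count stays at CACHE_SLOTS instead of running past it. */
    for ( gla = 0x1000; gla <= 0x6000; gla += 0x1000 )
        printf("gla %#lx -> %s (count %u)\n", (unsigned long)gla,
               find_slot(&vio, gla) ? "slot" : "NULL", vio.count);

    return 0;
}

With the pre-patch ordering (increment first, then check), the failing call
would already have pushed the counter beyond CACHE_SLOTS, so a subsequent
call without an intervening reset could have indexed past the end of the
array; checking before incrementing removes that possibility.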

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog

 

