
[Xen-changelog] [qemu-upstream-unstable] coroutine: Fix use after free with qemu_coroutine_yield()



commit 07db6859abffa79db6290a5f9f4dfdf93148189f
Author:     Kevin Wolf <kwolf@xxxxxxxxxx>
AuthorDate: Tue Feb 10 11:17:53 2015 +0100
Commit:     Michael Roth <mdroth@xxxxxxxxxxxxxxxxxx>
CommitDate: Sun Mar 8 22:58:14 2015 -0500

    coroutine: Fix use after free with qemu_coroutine_yield()
    
    Instead of using the same function for entering and exiting
    coroutines, and hoping that it never grows functionality that
    misbehaves with the parameters used for exiting, we can call
    directly into the real task switch in qemu_coroutine_switch().
    
    This fixes a use-after-free scenario: when a coroutine that has
    yielded is reentered, the part of coroutine_swap() that follows
    qemu_coroutine_switch() still accesses the old parent coroutine,
    which may have terminated in the meantime.
    
    Cc: qemu-stable@xxxxxxxxxx
    Signed-off-by: Kevin Wolf <kwolf@xxxxxxxxxx>
    Reviewed-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
    (cherry picked from commit 80687b4dd6f43b3fef61fef8fbcb358457350562)
    Signed-off-by: Michael Roth <mdroth@xxxxxxxxxxxxxxxxxx>
---
 qemu-coroutine.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/qemu-coroutine.c b/qemu-coroutine.c
index bd574aa..0101855 100644
--- a/qemu-coroutine.c
+++ b/qemu-coroutine.c
@@ -135,7 +135,7 @@ void coroutine_fn qemu_coroutine_yield(void)
     }
 
     self->caller = NULL;
-    coroutine_swap(self, to);
+    qemu_coroutine_switch(self, to, COROUTINE_YIELD);
 }
 
 void qemu_coroutine_adjust_pool_size(int n)
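
To make the scenario above concrete, here is a condensed sketch of the
pre-patch coroutine_swap() helper (simplified from the qemu-coroutine.c
of that era, with trace calls omitted; a sketch, not the verbatim
source):

    static void coroutine_swap(Coroutine *from, Coroutine *to)
    {
        CoroutineAction ret;

        /* On yield this suspends 'from'; it returns only once 'from'
         * is reentered, possibly much later and by a different
         * caller. */
        ret = qemu_coroutine_switch(from, to, COROUTINE_YIELD);

        /* By the time execution resumes here, the local 'to' still
         * points at the *old* parent, which may have terminated and
         * been freed in the meantime; the dereferences below are the
         * use after free. */
        qemu_co_queue_run_restart(to);

        switch (ret) {
        case COROUTINE_YIELD:
            return;
        case COROUTINE_TERMINATE:
            coroutine_delete(to);
            return;
        default:
            abort();
        }
    }

With the patch, qemu_coroutine_yield() calls qemu_coroutine_switch()
directly, so nothing runs on the yielding coroutine's stack after
reentry and the stale 'to' is never touched.  coroutine_swap() remains
in use only for entering coroutines, where the switch returns as soon
as the entered coroutine yields or terminates back, so 'to' is still
valid there.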
--
generated by git-patchbot for /home/xen/git/qemu-upstream-unstable.git

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog