
[xen staging] xen/arm: Call vcpu_ioreq_handle_completion() in check_for_vcpu_work()

commit 05b9c98e273695f626e667d9899bc16193d2e2c4
Author:     Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
AuthorDate: Fri Jan 29 03:48:43 2021 +0200
Commit:     Julien Grall <jgrall@xxxxxxxxxx>
CommitDate: Fri Jan 29 16:55:23 2021 +0000

    xen/arm: Call vcpu_ioreq_handle_completion() in check_for_vcpu_work()

    This patch adds the remaining bits needed for IOREQ support on Arm.
    Besides just calling vcpu_ioreq_handle_completion(), we need to handle
    its return value to make sure that all the vCPU work is done before
    we return to the guest (vcpu_ioreq_handle_completion() may return
    false if there is vCPU work to do or the IOREQ state is invalid).
    For that reason we use an unbounded loop in leave_hypervisor_to_guest().

    The worst that can happen here is that the vCPU will never run again
    (the I/O will never complete). But, in Xen's case, if the I/O never
    completes then it most likely means that something went horribly
    wrong with the Device Emulator, and it is most likely not safe
    to continue. So letting the vCPU spin forever if the I/O never
    completes is a safer action than letting it continue and leaving
    the guest in an unclear state, and is the best we can do for now.

    Please note, with this loop we will not spin forever on a pCPU,
    preventing any other vCPUs from being scheduled. At every iteration
    we call check_for_pcpu_work(), which processes pending softirqs.
    In case of failure, the guest will crash and the vCPU will be
    unscheduled. In the normal case, if rescheduling is necessary,
    the vCPU will be rescheduled to give place to someone else.
    Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
    Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
    Acked-by: Julien Grall <jgrall@xxxxxxxxxx>
    CC: Julien Grall <julien.grall@xxxxxxx>
    [On Arm only]
    Tested-by: Wei Chen <Wei.Chen@xxxxxxx>
 xen/arch/arm/traps.c | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 88487644c7..cb37a45b24 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -21,6 +21,7 @@
 #include <xen/hypercall.h>
 #include <xen/init.h>
 #include <xen/iocap.h>
+#include <xen/ioreq.h>
 #include <xen/irq.h>
 #include <xen/lib.h>
 #include <xen/mem_access.h>
@@ -2269,12 +2270,23 @@ static void check_for_pcpu_work(void)
  * Process pending work for the vCPU. Any call should be fast or
  * implement preemption.
  */
-static void check_for_vcpu_work(void)
+static bool check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
+#ifdef CONFIG_IOREQ_SERVER
+    bool handled;
+
+    local_irq_enable();
+    handled = vcpu_ioreq_handle_completion(v);
+    local_irq_disable();
+
+    if ( !handled )
+        return true;
+#endif
+
     if ( likely(!v->arch.need_flush_to_ram) )
-        return;
+        return false;
 
     /*
      * Give a chance for the pCPU to process work before handling the vCPU
@@ -2285,6 +2297,8 @@ static void check_for_vcpu_work(void)
+
+    return false;
 }
@@ -2297,7 +2311,13 @@ void leave_hypervisor_to_guest(void)
     local_irq_disable();
 
-    check_for_vcpu_work();
+    /*
+     * check_for_vcpu_work() may return true if there is more work to do
+     * before the vCPU can safely resume. This gives us an opportunity to
+     * deschedule the vCPU if needed.
+     */
+    while ( check_for_vcpu_work() )
+        check_for_pcpu_work();
     check_for_pcpu_work();
generated by git-patchbot for /home/xen/git/xen.git#staging


