
Re: [PATCH V3 15/23] xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed


  • To: Oleksandr Tyshchenko <olekstysh@xxxxxxxxx>
  • From: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Date: Mon, 30 Nov 2020 20:51:09 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Julien Grall <julien.grall@xxxxxxx>
  • Delivery-date: Mon, 30 Nov 2020 20:51:35 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH V3 15/23] xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed

Hello Oleksandr,

Oleksandr Tyshchenko writes:

> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>
> This patch adds proper handling of return value of
> vcpu_ioreq_handle_completion() which involves using a loop
> in leave_hypervisor_to_guest().
>
> The reason to use an unbounded loop here is the fact that a vCPU
> shouldn't continue until its I/O has completed. In Xen's case, if an
> I/O never completes then it most likely means that something went
> horribly wrong with the Device Emulator, and it is most likely not
> safe to continue. So letting the vCPU spin forever if the I/O never
> completes is safer than letting it continue with the guest in an
> unclear state, and is the best we can do for now.
>
> This wouldn't be an issue for Xen itself, as do_softirq() is called
> on every loop iteration. In case of failure, the guest will crash and
> the vCPU will be unscheduled.
>

Why don't you block the vCPU there and unblock it when the response is
ready? If I got it right, the "client" vCPU will spin in the loop,
eating its own scheduling budget with no useful work done. In the worst
case, it will prevent the "server" vCPU from running.
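Something along these lines is what I have in mind. It is only a rough
sketch, assuming vcpu_block()/vcpu_unblock() can be used from this
context and that the IOREQ notification path can find the waiting vCPU;
the ioreq_signal_completion() helper below is invented purely for
illustration:

    /* Sketch only, not a concrete proposal. */
    static bool check_for_vcpu_work(void)
    {
        struct vcpu *v = current;

    #ifdef CONFIG_IOREQ_SERVER
        bool handled;

        local_irq_enable();
        handled = vcpu_ioreq_handle_completion(v);
        if ( !handled )
        {
            /*
             * Park the vCPU instead of spinning in
             * leave_hypervisor_to_guest(); it stops burning its
             * scheduling budget while the Device Emulator works on
             * the request.
             */
            vcpu_block();
        }
        local_irq_disable();

        if ( !handled )
            return true; /* re-evaluate once we have been woken up */
    #endif

        /* ... rest as in your patch ... */
    }

    /*
     * Hypothetical wake-up path (name invented): called when the
     * emulator's response arrives, e.g. from the event channel
     * notification handler, so the parked vCPU becomes runnable again.
     */
    void ioreq_signal_completion(struct vcpu *v)
    {
        vcpu_unblock(v);
    }

Of course the blocking side would have to re-check the completion
status after setting the blocked flag, to avoid losing a wake-up, much
like wait_on_xen_event_channel() does on the x86 IOREQ side if I
remember correctly. But that is a detail.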

> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
> CC: Julien Grall <julien.grall@xxxxxxx>
>
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
>
> Changes V1 -> V2:
>    - new patch, changes were derived from (+ new explanation):
>      arm/ioreq: Introduce arch specific bits for IOREQ/DM features
>
> Changes V2 -> V3:
>    - update patch description
> ---
> ---
>  xen/arch/arm/traps.c | 31 ++++++++++++++++++++++++++-----
>  1 file changed, 26 insertions(+), 5 deletions(-)
>
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 036b13f..4cef43e 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2257,18 +2257,23 @@ static void check_for_pcpu_work(void)
>   * Process pending work for the vCPU. Any call should be fast or
>   * implement preemption.
>   */
> -static void check_for_vcpu_work(void)
> +static bool check_for_vcpu_work(void)
>  {
>      struct vcpu *v = current;
>  
>  #ifdef CONFIG_IOREQ_SERVER
> +    bool handled;
> +
>      local_irq_enable();
> -    vcpu_ioreq_handle_completion(v);
> +    handled = vcpu_ioreq_handle_completion(v);
>      local_irq_disable();
> +
> +    if ( !handled )
> +        return true;
>  #endif
>  
>      if ( likely(!v->arch.need_flush_to_ram) )
> -        return;
> +        return false;
>  
>      /*
>       * Give a chance for the pCPU to process work before handling the vCPU
> @@ -2279,6 +2284,8 @@ static void check_for_vcpu_work(void)
>      local_irq_enable();
>      p2m_flush_vm(v);
>      local_irq_disable();
> +
> +    return false;
>  }
>  
>  /*
> @@ -2291,8 +2298,22 @@ void leave_hypervisor_to_guest(void)
>  {
>      local_irq_disable();
>  
> -    check_for_vcpu_work();
> -    check_for_pcpu_work();
> +    /*
> +     * The reason to use an unbounded loop here is the fact that a vCPU
> +     * shouldn't continue until its I/O has completed. In Xen's case, if an
> +     * I/O never completes then it most likely means that something went
> +     * horribly wrong with the Device Emulator, and it is most likely not
> +     * safe to continue. So letting the vCPU spin forever if the I/O never
> +     * completes is safer than letting it continue with the guest in an
> +     * unclear state, and is the best we can do for now.
> +     *
> +     * This wouldn't be an issue for Xen itself, as do_softirq() is called
> +     * on every loop iteration. In case of failure, the guest will crash
> +     * and the vCPU will be unscheduled.
> +     */
> +    do {
> +        check_for_pcpu_work();
> +    } while ( check_for_vcpu_work() );
>  
>      vgic_sync_to_lrs();
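For reference, check_for_pcpu_work() is (if I read xen/arch/arm/traps.c
correctly) just the softirq-draining loop below, which is how
do_softirq() ends up being called on every loop iteration and how the
vCPU eventually gets descheduled after a domain_crash(). Paraphrased
from memory, so it may not be verbatim:

    /* Paraphrased from xen/arch/arm/traps.c, may not be verbatim. */
    static void check_for_pcpu_work(void)
    {
        ASSERT(!local_irq_is_enabled());

        /* Drain pending softirqs (scheduler, tasklets, ...) with IRQs on. */
        while ( softirq_pending(smp_processor_id()) )
        {
            local_irq_enable();
            do_softirq();
            local_irq_disable();
        }
    }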


-- 
Volodymyr Babchuk at EPAM


 

