
Re: [RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common



On Mon, 10 Aug 2020, Oleksandr wrote:
> On 08.08.20 01:19, Oleksandr wrote:
> > On 08.08.20 00:50, Stefano Stabellini wrote:
> > > On Fri, 7 Aug 2020, Oleksandr wrote:
> > > > On 06.08.20 03:37, Stefano Stabellini wrote:
> > > > 
> > > > Hi Stefano
> > > > 
> > > > Trying to simulate the IO_RETRY handling mechanism (according to the
> > > > model below), I continuously get IO_RETRY from try_fwd_ioserv() ...
> > > > 
> > > > > OK, thanks for the details. My interpretation seems to be correct.
> > > > > 
> > > > > In which case, it looks like xen/arch/arm/io.c:try_fwd_ioserv should
> > > > > return IO_RETRY. Then, xen/arch/arm/traps.c:do_trap_stage2_abort_guest
> > > > > also needs to handle try_handle_mmio returning IO_RETRY the first
> > > > > time around, and IO_HANDLED after QEMU does its job.
> > > > > 
> > > > > What should do_trap_stage2_abort_guest do on IO_RETRY? Simply return
> > > > > early and let the scheduler do its job? Something like:
> > > > > 
> > > > >               enum io_state state = try_handle_mmio(regs, hsr, gpa);
> > > > > 
> > > > >               switch ( state )
> > > > >               {
> > > > >               case IO_ABORT:
> > > > >                   goto inject_abt;
> > > > >               case IO_HANDLED:
> > > > >                   advance_pc(regs, hsr);
> > > > >                   return;
> > > > >               case IO_RETRY:
> > > > >                   /* finish later */
> > > > >                   return;
> > > > >               case IO_UNHANDLED:
> > > > >                   /* IO unhandled, try another way to handle it. */
> > > > >                   break;
> > > > >               default:
> > > > >                   ASSERT_UNREACHABLE();
> > > > >               }
> > > > > 
> > > > > Then, xen/arch/arm/ioreq.c:handle_mmio() gets called by
> > > > > handle_hvm_io_completion() after QEMU completes the emulation. Today,
> > > > > handle_mmio just sets the user register with the read value.
> > > > > 
> > > > > But it would be better if it called the original function
> > > > > do_trap_stage2_abort_guest again to actually retry the original operation.
> > > > > This time do_trap_stage2_abort_guest calls try_handle_mmio() and gets
> > > > > IO_HANDLED instead of IO_RETRY,
> > > > I may be missing some important point, but I fail to see why try_handle_mmio
> > > > (try_fwd_ioserv) would return IO_HANDLED instead of IO_RETRY at this
> > > > stage.
> > > > Or does the current try_fwd_ioserv() logic need rework?
> > > I think you should check ioreq->state in try_fwd_ioserv(): if the
> > > result is ready, ioreq->state will be STATE_IORESP_READY, and you
> > > can return IO_HANDLED.
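
In code, the check suggested above would be roughly the following (just a
sketch; I'm assuming try_fwd_ioserv() reaches the pending request through
the same vio->io_req accessor the rest of this thread uses):

    /* The IOREQ server has already produced the response: report it as handled. */
    if ( vio->io_req.state == STATE_IORESP_READY )
        return IO_HANDLED;
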
> > 
> 
> I optimized the test patch a bit (now it looks much simpler). I didn't face
> any issues during a quick test.

Both patches get much closer to following the proper state machine,
great! This patch is certainly a good improvement, but the slightly
larger patch you sent earlier is even better: it makes the following
additional changes that would be good to have:

- try_fwd_ioserv returns IO_HANDLED on state == STATE_IORESP_READY
- handle_mmio simply calls do_trap_stage2_abort_guest (see the sketch below)
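
For the second point, handle_mmio() would then reduce to something like the
sketch below. It is only a sketch: it assumes do_trap_stage2_abort_guest()
is made non-static and declared in asm/traps.h (which ioreq.c already
includes in your patch):

    bool handle_mmio(void)
    {
        struct cpu_user_regs *regs = guest_cpu_user_regs();
        const union hsr hsr = { .bits = regs->hsr };

        /*
         * Replay the original access: this time try_fwd_ioserv() sees
         * STATE_IORESP_READY and returns IO_HANDLED, so the register
         * write-back and advance_pc() happen on the normal abort path.
         */
        do_trap_stage2_abort_guest(regs, hsr);

        return true;
    }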

I would also remove "handle_mmio_finish" and do the guest register
setting as well as setting vio->io_req.state to STATE_IOREQ_NONE
directly in try_fwd_ioserv.
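
Extending the state check above, the second pass through try_fwd_ioserv()
could then look roughly like this (again only a sketch: the sign-extension
step handle_mmio currently performs is omitted for brevity, and I'm assuming
the function still receives regs and the mmio_info_t describing the original
access):

    if ( vio->io_req.state == STATE_IORESP_READY )
    {
        /* For reads, write back the value returned by the IOREQ server. */
        if ( !info->dabt.write )
            set_user_reg(regs, info->dabt.reg, vio->io_req.data);

        /* The request has been fully consumed: reset the state machine. */
        vio->io_req.state = STATE_IOREQ_NONE;

        return IO_HANDLED;
    }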

 
> ---
>  xen/arch/arm/io.c    | 4 ----
>  xen/arch/arm/ioreq.c | 7 ++++++-
>  xen/arch/arm/traps.c | 4 +++-
>  3 files changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
> index 436f669..3063577 100644
> --- a/xen/arch/arm/io.c
> +++ b/xen/arch/arm/io.c
> @@ -156,10 +156,6 @@ static enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
>      else
>          vio->io_completion = HVMIO_mmio_completion;
> 
> -    /* XXX: Decide what to do */
> -    if ( rc == IO_RETRY )
> -        rc = IO_HANDLED;
> -
>      return rc;
>  }
>  #endif
> diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
> index 8f60c41..e5235c6 100644
> --- a/xen/arch/arm/ioreq.c
> +++ b/xen/arch/arm/ioreq.c
> @@ -33,6 +33,8 @@
>  #include <public/hvm/dm_op.h>
>  #include <public/hvm/ioreq.h>
> 
> +#include <asm/traps.h>
> +
>  bool handle_mmio(void)
>  {
>      struct vcpu *v = current;
> @@ -52,7 +54,7 @@ bool handle_mmio(void)
> 
>      /* XXX: Do we need to take care of write here ? */
>      if ( dabt.write )
> -        return true;
> +        goto done;
> 
>      /*
>       * Sign extend if required.
> @@ -72,6 +74,9 @@ bool handle_mmio(void)
> 
>      set_user_reg(regs, dabt.reg, r);
> 
> +done:
> +    advance_pc(regs, hsr);
> +
>      return true;
>  }
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index ea472d1..974c744 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1965,11 +1965,13 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
>              case IO_HANDLED:
>                  advance_pc(regs, hsr);
>                  return;
> +            case IO_RETRY:
> +                /* finish later */
> +                return;
>              case IO_UNHANDLED:
>                  /* IO unhandled, try another way to handle it. */
>                  break;
>              default:
> -                /* XXX: Handle IO_RETRY */
>                  ASSERT_UNREACHABLE();
>              }
>          }

 

