
Re: [RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common



On Fri, 7 Aug 2020, Oleksandr wrote:
> On 06.08.20 03:37, Stefano Stabellini wrote:
> 
> Hi Stefano
> 
> Trying to simulate the IO_RETRY handling mechanism (according to the model
> below), I continuously get IO_RETRY from try_fwd_ioserv() ...
> 
> > OK, thanks for the details. My interpretation seems to be correct.
> > 
> > In which case, it looks like xen/arch/arm/io.c:try_fwd_ioserv should
> > return IO_RETRY. Then, xen/arch/arm/traps.c:do_trap_stage2_abort_guest
> > also needs to handle try_handle_mmio returning IO_RETRY the first time
> > around, and IO_HANDLED after QEMU does its job.
> > 
> > What should do_trap_stage2_abort_guest do on IO_RETRY? Simply return
> > early and let the scheduler do its job? Something like:
> > 
> >              enum io_state state = try_handle_mmio(regs, hsr, gpa);
> > 
> >              switch ( state )
> >              {
> >              case IO_ABORT:
> >                  goto inject_abt;
> >              case IO_HANDLED:
> >                  advance_pc(regs, hsr);
> >                  return;
> >              case IO_RETRY:
> >                  /* finish later */
> >                  return;
> >              case IO_UNHANDLED:
> >                  /* IO unhandled, try another way to handle it. */
> >                  break;
> >              default:
> >                  ASSERT_UNREACHABLE();
> >              }
> > 
> > Then, xen/arch/arm/ioreq.c:handle_mmio() gets called by
> > handle_hvm_io_completion() after QEMU completes the emulation. Today,
> > handle_mmio just sets the user register with the read value.
> > 
> > But it would be better if it called the original function
> > do_trap_stage2_abort_guest again to actually retry the original operation.
> > This time do_trap_stage2_abort_guest calls try_handle_mmio() and gets
> > IO_HANDLED instead of IO_RETRY,
> I may be missing some important point, but I fail to see why try_handle_mmio
> (try_fwd_ioserv) would return IO_HANDLED instead of IO_RETRY at this stage.
> Or does the current try_fwd_ioserv() logic need rework?

I think you should check ioreq->state in try_fwd_ioserv(): if the
result is ready, ioreq->state will be STATE_IORESP_READY, and you
can return IO_HANDLED.

That is assuming that you are looking at the live version of the ioreq
shared with QEMU rather than a private copy of it, which I am not sure
about. Looking at try_fwd_ioserv(), it would seem that vio->io_req is
just a copy? The live version is the one returned by get_ioreq()?

Even in handle_hvm_io_completion, instead of setting vio->io_req.state
to STATE_IORESP_READY by hand, it would be better to look at the live
version of the ioreq because QEMU will have already set ioreq->state
to STATE_IORESP_READY (hw/i386/xen/xen-hvm.c:cpu_handle_ioreq).

 
> > thus, it will advance_pc (the program
> > counter) completing the handling of this instruction.
> > 
> > The user register with the read value could be set by try_handle_mmio if
> > try_fwd_ioserv returns IO_HANDLED instead of IO_RETRY.
> > 
> > Is that how the state machine is expected to work?



 

