
Re: [Xen-devel] [PATCH v4 13/17] x86/hvm: remove HVMIO_dispatched I/O state



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 24 June 2015 16:53
> To: Paul Durrant
> Cc: Andrew Cooper; xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> Subject: Re: [PATCH v4 13/17] x86/hvm: remove HVMIO_dispatched I/O
> state
> 
> >>> On 24.06.15 at 13:24, <paul.durrant@xxxxxxxxxx> wrote:
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -138,20 +138,14 @@ static int hvmemul_do_io(
> >          if ( data_is_addr || dir == IOREQ_WRITE )
> >              return X86EMUL_UNHANDLEABLE;
> >          goto finish_access;
> > -    case HVMIO_dispatched:
> > -        /* May have to wait for previous cycle of a multi-write to complete. */
> > -        if ( is_mmio && !data_is_addr && (dir == IOREQ_WRITE) &&
> > -             (addr == (vio->mmio_large_write_pa +
> > -                       vio->mmio_large_write_bytes)) )
> > -            return X86EMUL_RETRY;
> > -        /* fallthrough */
> 
> The description is plausible, and the rest of the patch looks plausible
> too, but I can't seem to follow why the above can go away without
> any replacement (or what the replacement would be). Could you
> clarify that, please?
> 

The reason is that the dispatched state was only ever used for writes (or for
data_is_addr, which is disallowed in this case), and the code below this forces
writes to complete with X86EMUL_OKAY, meaning that emulation will never be
retried (emulation is not re-entered from hvm_io_assist() unless the state was
HVMIO_mmio_awaiting_completion). Thus I believe this was either always dead
code, or it has been dead for a long time.
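
To make that concrete, here is a minimal sketch of the completion logic as I
read it. This is simplified, standalone C with invented names -- it is not the
actual Xen code, and complete_io_state() in particular is made up purely for
illustration:

#include <stdio.h>

enum hvmio_state {
    HVMIO_none,
    HVMIO_dispatched,
    HVMIO_mmio_awaiting_completion,
};

enum x86emul_rc { X86EMUL_OKAY, X86EMUL_RETRY, X86EMUL_UNHANDLEABLE };

/* Invented stand-in for the relevant per-vcpu I/O state. */
struct vio_state {
    enum hvmio_state io_state;
};

/* Sketch of what happens once a dispatched request completes. */
static enum x86emul_rc complete_io_state(struct vio_state *vio)
{
    switch ( vio->io_state )
    {
    case HVMIO_dispatched:
        /*
         * Dispatched state is only entered for writes (or data_is_addr,
         * which is disallowed here), and writes are forced to complete
         * with X86EMUL_OKAY, so emulation is never re-entered from this
         * state -- hence the removed multi-write retry check was dead.
         */
        vio->io_state = HVMIO_none;
        return X86EMUL_OKAY;
    case HVMIO_mmio_awaiting_completion:
        /* Only this state causes hvm_io_assist() to re-enter emulation. */
        vio->io_state = HVMIO_none;
        return X86EMUL_RETRY;
    default:
        return X86EMUL_UNHANDLEABLE;
    }
}

int main(void)
{
    struct vio_state vio = { .io_state = HVMIO_dispatched };
    /* A completed write in the dispatched state yields OKAY, not RETRY. */
    printf("dispatched write -> %s\n",
           complete_io_state(&vio) == X86EMUL_OKAY ? "OKAY" : "RETRY");
    return 0;
}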

Do you want me to stick words to that effect in the comment?

  Paul

> Jan

