
Re: [Xen-devel] [PATCH RFC] x86/HVM: suppress I/O completion for port output


  • To: 'Jan Beulich' <JBeulich@xxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Thu, 29 Mar 2018 08:54:15 +0000
  • Accept-language: en-GB, en-US
  • Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Delivery-date: Thu, 29 Mar 2018 08:54:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHTxzLdsHMYJv/lvEa1lw0pjB6PdqPm5oaA
  • Thread-topic: [PATCH RFC] x86/HVM: suppress I/O completion for port output

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 29 March 2018 08:52
> To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Paul Durrant
> <Paul.Durrant@xxxxxxxxxx>
> Subject: [PATCH RFC] x86/HVM: suppress I/O completion for port output
> 
> We don't break up port requests if they cross emulation entity
> boundaries, and a write to an I/O port is necessarily the last
> operation of an instruction instance, so there's no need to re-invoke
> the full emulation path upon receiving the result from an external
> emulator.
> 
> If we want to properly split port accesses in the future, this
> change will need to be reverted, as it would prevent things working
> correctly when e.g. the first part needs to go to an external emulator
> while the second part is to be handled internally.
> 
> While this addresses the reported problem of Windows paging out the
> buffer underneath an in-progress REP OUTS, it does not address the
> wider problem that the re-issued insn (to the insn emulator) may
> raise an exception (#PF) when replaying a previously successful
> memory access (we record only prior MMIO accesses).
> 
> Leaving aside the problem being worked around here, I think the
> performance aspect alone is a good reason to change the behavior.
> 
> Also take the opportunity to change hvm_vcpu_io_need_completion()'s
> return type from bool_t to bool.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> RFC: At this point it is only a hypothesis that this change addresses
>      the observed issue. IOW testing in the actual environment is still
>      pending.
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -282,7 +282,7 @@ static int hvmemul_do_io(
>              rc = hvm_send_ioreq(s, &p, 0);
>              if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
>                  vio->io_req.state = STATE_IOREQ_NONE;
> -            else if ( data_is_addr )
> +            else if ( data_is_addr || (!is_mmio && dir == IOREQ_WRITE) )

I'm not entirely sure, but it seems like this test might actually be 
!hvm_vcpu_io_need_completion()...

>                  rc = X86EMUL_OKAY;
>          }
>          break;
> --- a/xen/include/asm-x86/hvm/vcpu.h
> +++ b/xen/include/asm-x86/hvm/vcpu.h
> @@ -91,10 +91,12 @@ struct hvm_vcpu_io {
>      const struct g2m_ioport *g2m_ioport;
>  };
> 
> -static inline bool_t hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
> +static inline bool hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
>  {
>      return (vio->io_req.state == STATE_IOREQ_READY) &&
> -           !vio->io_req.data_is_ptr;
> +           !vio->io_req.data_is_ptr &&
> +           (vio->io_req.type != IOREQ_TYPE_PIO ||
> +            vio->io_req.dir != IOREQ_WRITE);

... now that you've updated it here.
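
For illustration only (untested, and assuming vio is the local
struct hvm_vcpu_io pointer already in scope at that point in
hvmemul_do_io()), the first hunk could then shrink to:

            rc = hvm_send_ioreq(s, &p, 0);
            if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
                vio->io_req.state = STATE_IOREQ_NONE;
            else if ( !hvm_vcpu_io_need_completion(vio) )
                rc = X86EMUL_OKAY;

If I'm reading the code right, vio->io_req.state is still
STATE_IOREQ_READY on that path, so the open-coded test and the
helper should be equivalent there.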

  Paul

>  }
> 
>  struct nestedvcpu {
> 
> 

