
Re: [Xen-devel] [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers



> -----Original Message-----
> From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Sent: 21 August 2019 15:59
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>
> Subject: [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers
> 
> Internal ioreq servers are always processed first, and ioreqs are
> dispatched by calling the handler function. If no internal server has
> registered for an ioreq, it is then forwarded to external callers.

Distinct id ranges would help here... internal ids could be walked first, then 
external ones. If there's no possibility of interleaving then you don't need 
the retry.
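
Something along these lines (just a sketch; MAX_NR_INTERNAL_IOREQ_SERVERS and 
the predicate name below are made up for illustration, not what the series 
currently defines):

  /*
   * Reserve the low ids for internal servers, so that a single ordered
   * walk visits internal servers before external ones and the retry
   * label becomes unnecessary.
   */
  #define MAX_NR_INTERNAL_IOREQ_SERVERS 8

  static inline bool hvm_ioreq_is_internal(unsigned int id)
  {
      return id < MAX_NR_INTERNAL_IOREQ_SERVERS;
  }

hvm_select_ioreq_server() could then do one pass over the ids in order and 
drop both the 'internal' flag and the goto.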

  Paul

> 
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
>  xen/arch/x86/hvm/ioreq.c | 19 ++++++++++++++++++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 23ef9b0c02..3fb6fe9585 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1305,6 +1305,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>      uint8_t type;
>      uint64_t addr;
>      unsigned int id;
> +    bool internal = true;
> 
>      if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
>          return NULL;
> @@ -1345,11 +1346,12 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>          addr = p->addr;
>      }
> 
> + retry:
>      FOR_EACH_IOREQ_SERVER(d, id, s)
>      {
>          struct rangeset *r;
> 
> -        if ( !s->enabled )
> +        if ( !s->enabled || s->internal != internal )
>              continue;
> 
>          r = s->range[type];
> @@ -1387,6 +1389,12 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>          }
>      }
> 
> +    if ( internal )
> +    {
> +        internal = false;
> +        goto retry;
> +    }
> +
>      return NULL;
>  }
> 
> @@ -1492,9 +1500,18 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
> 
>      ASSERT(s);
> 
> +    if ( s->internal && buffered )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return X86EMUL_UNHANDLEABLE;
> +    }
> +
>      if ( buffered )
>          return hvm_send_buffered_ioreq(s, proto_p);
> 
> +    if ( s->internal )
> +        return s->handler(curr, proto_p);
> +
>      if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
>          return X86EMUL_RETRY;
> 
> --
> 2.22.0
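
For completeness, with the dispatch above an internal handler ends up with 
this shape (illustrative sketch only; the function name and the read/write 
behaviour are made up):

  /* Internal ioreq server handler, as invoked via s->handler(curr, proto_p). */
  static int fake_device_handler(struct vcpu *v, ioreq_t *p)
  {
      /* Reads return all-ones, as an unbacked device would. */
      if ( p->dir == IOREQ_READ && !p->data_is_ptr )
          p->data = ~0ul;

      /* Writes are silently discarded. */
      return X86EMUL_OKAY;
  }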
