
Re: [Xen-devel] [PATCH v2 06/11] ioreq: allow dispatching ioreqs to internal servers



On 03.09.2019 18:14, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1493,9 +1493,18 @@ int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
>  
>      ASSERT(s);
>  
> +    if ( hvm_ioreq_is_internal(id) && buffered )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return X86EMUL_UNHANDLEABLE;
> +    }
> +
>      if ( buffered )
>          return hvm_send_buffered_ioreq(s, proto_p);

Perhaps better (to avoid yet another conditional on the non-
buffered path)

    if ( buffered )
    {
        if ( likely(!hvm_ioreq_is_internal(id)) )
            return hvm_send_buffered_ioreq(s, proto_p);

        ASSERT_UNREACHABLE();
        return X86EMUL_UNHANDLEABLE;
    }

?

> +    if ( hvm_ioreq_is_internal(id) )
> +        return s->handler(curr, proto_p, s->data);

At this point I'm becoming curious what the significance of
ioreq_t's state field is for internal servers, as nothing has
been said so far in this regard: Is it entirely unused? Is
every handler supposed to drive it? If so, what about the
return value here and proto_p->state possibly not matching up?
And if not, shouldn't you update the field here, at the very
least to avoid any chance of confusing callers?
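
To illustrate the latter option, here's a minimal sketch (handler
name and body made up; the signature merely inferred from the
s->handler(curr, proto_p, s->data) call quoted above) of a hook
driving the field itself:

    static int my_internal_handler(struct vcpu *v, ioreq_t *p, void *data)
    {
        /* ... carry out the actual emulation based on p->addr, p->size ... */

        /*
         * Flag completion in the state field, so that callers inspecting
         * proto_p->state aren't left with a stale value.
         */
        p->state = STATE_IORESP_READY;

        return X86EMUL_OKAY;
    }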

A possible consequence of the answers to this might be for
the hook's middle parameter to be constified (in patch 4).
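
I.e., if handlers aren't meant to modify the request, something
along these lines (type name made up, merely to show the
constification):

    typedef int (*hvm_ioreq_handler_t)(struct vcpu *v, const ioreq_t *p,
                                       void *data);

That way the compiler would enforce that internal handlers leave
the request, state field included, untouched.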

Having said this, as a result of having looked at some of the
code involved, and with the cover letter not clarifying this,
what's the reason for going this seemingly more complicated
route, rather than putting vPCI behind the hvm_io_intercept()
machinery, as is the case for other internal handling?
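
For comparison, a rough sketch of that alternative, using the
existing register_portio_handler() interface that feeds the
intercept machinery (the vpci_* names and the handler bodies
below are made up purely for illustration):

    static int vpci_cf8_cfc_intercept(int dir, unsigned int port,
                                      unsigned int bytes, uint32_t *val)
    {
        if ( dir == IOREQ_READ )
            /* A real handler would decode CF8 and fetch the value via vPCI. */
            *val = ~0u;
        else
        {
            /* A real handler would decode CF8 and hand the write to vPCI. */
        }

        return X86EMUL_OKAY;
    }

    void vpci_setup_portio(struct domain *d)
    {
        /* Covers 0xcf8 (address) through 0xcff (data window). */
        register_portio_handler(d, 0xcf8, 8, vpci_cf8_cfc_intercept);
    }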

Jan
