
[Xen-devel] Re: [PATCH] Mini-OS to use evtchn_port_t for ports and other improvements



Sorry about the delay in responding; I've been busy with other things.

This mostly looks like a pretty reasonable bit of cleanup, with just a
few minor niggles.

> @@ -86,8 +86,7 @@
>       ev_actions[port].data = NULL;
>  }
>  
> -int bind_virq( u32 virq, void (*handler)(int, struct pt_regs *, void *data),
> -                        void *data)
> +int bind_virq(evtchn_port_t virq, evtchn_handler_t handler, void *data)
Did you mean this?  A virq isn't the same as an evtchn_port_t, and is
usually uint32_t in the Xen header files.

>  {
>       evtchn_op_t op;
>  
> @@ -105,7 +104,7 @@
>       return 0;
>  }
>  
> -void unbind_virq( u32 port )
> +void unbind_virq(evtchn_port_t port)
>  {
>       unbind_evtchn(port);
>  }
Hmm... not your fault, but unbinding from virqs is broken.  You need
to know the port number, but there's no way of getting it from
bind_virq.  Given that nobody in the tree currently uses unbind_virq,
I'd be inclined to just kill it.
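
(If you did want to keep it, the obvious fix would be to have bind_virq
hand the port back to the caller.  Untested sketch, using uint32_t for
the virq as per the comment above and relying on bind_evtchn returning
the port, as it does in your patch:

    /* Sketch only: return the bound port, or a negative value on error,
     * so the caller has something to pass to unbind_virq() later. */
    int bind_virq(uint32_t virq, evtchn_handler_t handler, void *data)
    {
        evtchn_op_t op;

        op.cmd = EVTCHNOP_bind_virq;
        op.u.bind_virq.virq = virq;
        op.u.bind_virq.vcpu = 0;

        if (HYPERVISOR_event_channel_op(&op) != 0)
            return -1;

        /* bind_evtchn() returns the port it bound, per the patch above. */
        return bind_evtchn(op.u.bind_virq.port, handler, data);
    }

But given the lack of users, killing unbind_virq is simpler.)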

> +static inline evtchn_port_t
> +maybe_bind_evtchn(evtchn_port_t port, evtchn_handler_t handler, void *data)
> +{
> +    if (handler)
> +             return bind_evtchn(port, handler, data);
> +    else
> +             return port;
> +}

> +
> +int evtchn_alloc_unbound(domid_t pal, evtchn_handler_t handler,
> +                         void *data, evtchn_port_t *port)
> +{
> +    evtchn_op_t op;
> +    op.cmd = EVTCHNOP_alloc_unbound;
> +    op.u.alloc_unbound.dom = DOMID_SELF;
> +    op.u.alloc_unbound.remote_dom = pal;
> +    int err = HYPERVISOR_event_channel_op(&op);
> +    if (err)
> +             return err;
> +    *port = maybe_bind_evtchn(op.u.alloc_unbound.port, handler, data);
Why maybe_bind?  Do you ever expect to need to allocate an unbound
event channel before you know what handler to use for it?

> +    return err;
> +}
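
If the answer is no, I'd just bind unconditionally and drop the helper.
Untested sketch of what I mean:

    int evtchn_alloc_unbound(domid_t pal, evtchn_handler_t handler,
                             void *data, evtchn_port_t *port)
    {
        evtchn_op_t op;
        int err;

        op.cmd = EVTCHNOP_alloc_unbound;
        op.u.alloc_unbound.dom = DOMID_SELF;
        op.u.alloc_unbound.remote_dom = pal;

        err = HYPERVISOR_event_channel_op(&op);
        if (err)
            return err;

        /* The handler is known at allocation time, so always bind. */
        *port = bind_evtchn(op.u.alloc_unbound.port, handler, data);
        return 0;
    }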
> +
> +/* Connect to a port so as to allow the exchange of notifications with
> +   the pal. Returns the result of the hypervisor call. */
> +
> +int evtchn_bind_interdomain(domid_t pal, evtchn_port_t remote_port,
> +                            evtchn_handler_t handler, void *data,
> +                            evtchn_port_t *local_port)
> +{
> +    evtchn_op_t op;
> +    op.cmd = EVTCHNOP_bind_interdomain;
> +    op.u.bind_interdomain.remote_dom = pal;
> +    op.u.bind_interdomain.remote_port = remote_port;
> +    int err = HYPERVISOR_event_channel_op(&op);
> +    if (err)
> +             return err;
> +    evtchn_port_t port = op.u.bind_interdomain.local_port;
> +    clear_evtchn(port);            /* Without, handler gets invoked now! */
Invoking the handler as soon as you bind the interdomain channel is a
mostly-deliberate part of the interface.  If the other end sends
notifications before you get around to binding, they can be lost, and
forcing the channel to fire as soon as you bind to it avoids some
potential lost wakeups.
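
What I'd do instead is drop the clear_evtchn() and write the handler so
that it copes with the wakeup it gets at bind time.  Rough sketch,
assuming evtchn_handler_t keeps the (port, regs, data) shape and using a
dummy flag where real code would check its request ring:

    /* Set by whatever actually produces work for this channel. */
    static volatile int peer_has_work;

    static void demo_handler(evtchn_port_t port, struct pt_regs *regs,
                             void *data)
    {
        (void)port;
        (void)regs;
        (void)data;

        /* The bind-time notification arrives before the peer has sent
         * anything; finding no work and returning is harmless. */
        if (!peer_has_work)
            return;

        /* ... process the request ring here ... */
        peer_has_work = 0;
    }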

> +    *local_port = maybe_bind_evtchn(port, handler, data);
Again, why maybe_bind?

> @@ -137,6 +137,24 @@
>      return gref;
>  }
>  
> +static char *gnttabop_error_msgs[] = GNTTABOP_error_msgs;
Maybe static const char *const gnttabop_error_msgs[] ?

> +
> +static const size_t gnttabop_error_size =
> +    sizeof(gnttabop_error_msgs)/sizeof(gnttabop_error_msgs[0]);
Use ARRAY_SIZE here.
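
Putting those two together, I was thinking of something like this
(assuming ARRAY_SIZE is available in, or easy to add to, Mini-OS's
headers):

    static const char *const gnttabop_error_msgs[] = GNTTABOP_error_msgs;
    static const size_t gnttabop_error_size = ARRAY_SIZE(gnttabop_error_msgs);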


Other than that, this looks pretty good to me.

Steven.
