Re: [Xen-devel] [PATCH v4 12/32] libxl: Design of an async API to issue QMP commands to QEMU
On Thu, Aug 02, 2018 at 11:01:43AM +0200, Roger Pau Monné wrote:
> On Fri, Jul 27, 2018 at 03:05:54PM +0100, Anthony PERARD wrote:
> > All the functions will be implemented in later patches.
> >
> > This patch includes the API that libxl can use to send QMP commands to
> > QEMU.
> >
> > Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
> > ---
> >  tools/libxl/libxl_internal.h | 76 +++++++++++++++++++++++++++++++++++-
> >  1 file changed, 74 insertions(+), 2 deletions(-)
> >
> > diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> > index 5b71a23d23..c453ac10a5 100644
> > --- a/tools/libxl/libxl_internal.h
> > +++ b/tools/libxl/libxl_internal.h
> > @@ -202,6 +202,8 @@ typedef struct libxl__ao libxl__ao;
> >  typedef struct libxl__aop_occurred libxl__aop_occurred;
> >  typedef struct libxl__osevent_hook_nexus libxl__osevent_hook_nexus;
> >  typedef struct libxl__osevent_hook_nexi libxl__osevent_hook_nexi;
> > +typedef struct libxl__json_object libxl__json_object;
> > +typedef struct libxl__carefd libxl__carefd;
> >
> >  typedef struct libxl__domain_create_state libxl__domain_create_state;
> >  typedef void libxl__domain_create_cb(struct libxl__egc *egc,
> > @@ -357,6 +359,76 @@ struct libxl__ev_child {
> >      LIBXL_LIST_ENTRY(struct libxl__ev_child) entry;
> >  };
> >
> > +/*
> > + * QMP asynchronous calls
> > + */
>
> Place the title in the same comment block?

Will do.

> > +/*
> > + * This facility allows a command to be sent to QEMU, and the response to be
> > + * handed to a callback function. Each libxl__ev_qmp handles zero or one
>
> Do you really mean 'zero or one' or 'zero or more'?

It is zero or one. But I'm not sure that extra sentence is useful.
Initially, it was followed by "if multiple commands are to be sent
concurrently, multiple libxl__ev_qmp's must be used." But concurrency
between multiple libxl__ev_qmp isn't possible: the second libxl__ev_qmp
created will have to wait until the first one has been disposed of. But
sending multiple commands can be done, by waiting for the first one to
be done, then sending the second one. This is why I have the paragraph
about "commands can be chained". (See the usage sketch appended after
this message.)

> > + * outstanding command.
> > + *
> > + * Commands can be chained, with a same connection. (e.g. "add-fd" will need to
>                                       ^ the
> > + * be chained to the next command). A libxl__ev_qmp can be reused when the
>                                                                        ^ after
> > + * callback is been called in order to use the same connection.
>                 ^ has

Writing "after the callback has been called", would that mean that one
would need to wait for the callback to return before reusing the
libxl__ev_qmp? That is what I meant: there is no need to wait for the
callback to return before reusing the same libxl__ev_qmp.

> > + *
> > + * Only one connection at a time can be made to one QEMU, so avoid keeping a
>                                                              ^ to

"To avoid" feels like a how-to; I intended to write what should be done
and that it is not an option.

> > + * libxl__ev_qmp Connected for to long and call libxl__ev_qmp_dispose as soon
>                     ^ unneeded cap        ^ remove 'and'

The use of the capital was an attempt to say that it is one of the
possible states in which libxl__ev_qmp can be.

> > + * as it is not needed anymore.
> > + *
> > + * Possible states of a libxl__ev_qmp:
> > + *   Undefined
> > + *    Might contain anything.
> > + *   Idle
> > + *    Struct contents are defined enough to pass to any libxl__ev_qmp_*
> > + *    functions.
>            ^ function
> > + *    The struct does not contain references to any allocated private resources
> > + *    so can be thrown away.
>
> I would add '... can be thrown away without any teardown.'

I don't think that's useful to point out. One would have to call the
teardown function (_dispose) in order to have an Idle state, anyway.

> > + *   Active
> > + *    Currently waiting for the callback to be called.
> > + *    _dispose must be called to reclaim resources.
> > + *   Connected
> > + *    Struct contain allocated ressources.
> > + *    Calling _send() with this same ev will use the same QMP connection.
> > + *    _dispose() must be called to reclaim resources.
> > + *
> > + * libxl__ev_qmp_init: Undefined/Idle -> Idle
> > + *
> > + * libxl__ev_qmp_send: Idle/Connected -> Active (on error: Idle)
> > + *  Sends a command to QEMU.
> > + *  callback will be called when a response is received or when an error
> > + *  as occured.
> > + *
> > + * libxl__ev_qmp_dispose: Connected/Active/Idle -> Idle
> > + *
> > + * callback: When called: Active -> Connected
> > + *  When called, ev is Connected and can be reused or disposed of.
> > + *  When an error occured, it is called with response == NULL and the error
>        ^ If
> > + *  code in rc.
> > + *  The callback is only called once.
> > + */
> > +typedef struct libxl__ev_qmp libxl__ev_qmp;
> > +typedef void libxl__ev_qmp_callback(libxl__egc *egc, libxl__ev_qmp *ev,
> > +                                    const libxl__json_object *response,
> > +                                    int rc);
> > +
> > +_hidden void libxl__ev_qmp_init(libxl__ev_qmp *ev);
> > +_hidden int libxl__ev_qmp_send(libxl__gc *gc, libxl__ev_qmp *ev,
> > +                               const char *cmd, libxl__json_object *args);
> > +_hidden void libxl__ev_qmp_dispose(libxl__gc *gc, libxl__ev_qmp *ev);
> > +
> > +struct libxl__ev_qmp {
> > +    /* caller should include this in their own struct */
> > +    /* caller must fill these in, and they must all remain valid */
>                                                       ^ no need for 'all'
> > +    uint32_t domid;
>
> Strictly speaking domid is an uint16_t.

Nothing in libxl agrees with this statement. domid is almost always
stored in a uint32_t (sometimes an int). There's even a libxl_domid
type (I just found out!) which is uint32_t. Maybe I could try to use
libxl_domid, and try to spread its usage inside libxl.

> > +    libxl__ev_qmp_callback *callback;
> > +    libxl__carefd *cfd; /* set to send a fd with the command, NULL otherwise */
> > +
> > +    /* remaining fields are private to libxl_ev_qmp_* */
> > +
>
> Extra newline?
>
> > +    int id;
> > +};
> > +
>
> Thanks, Roger.

Thanks,

--
Anthony PERARD
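[Appended usage sketch]

For illustration, here is a minimal sketch of how a caller might chain two
commands with the API proposed in this patch. It is not part of the patch:
the example_qmp_state struct, the command names ("query-status",
"query-version"), and the use of libxl's usual CONTAINER_OF/STATE_AO_GC idiom
to recover the caller's state and a gc are assumptions made for the example;
only libxl__ev_qmp_init/_send/_dispose, the callback type and the
caller-filled fields come from the header comment above. The fragment is
meant to live in a libxl source file that includes libxl_internal.h.

/* Hypothetical caller state; the ev_qmp is embedded as the comment suggests. */
typedef struct {
    libxl__ao *ao;          /* assumed: the caller's async operation */
    libxl__ev_qmp qmp;
} example_qmp_state;

static void example_first_reply(libxl__egc *egc, libxl__ev_qmp *ev,
                                const libxl__json_object *response, int rc);
static void example_second_reply(libxl__egc *egc, libxl__ev_qmp *ev,
                                 const libxl__json_object *response, int rc);

/* Kick off the first command: Idle -> Active (on error: Idle). */
static int example_start(libxl__gc *gc, example_qmp_state *eqs, uint32_t domid)
{
    libxl__ev_qmp_init(&eqs->qmp);          /* Undefined/Idle -> Idle */

    /* caller-filled fields, which must remain valid */
    eqs->qmp.domid = domid;
    eqs->qmp.callback = example_first_reply;
    eqs->qmp.cfd = NULL;                    /* no fd sent with this command */

    /* NULL args: assume a command that takes no arguments */
    return libxl__ev_qmp_send(gc, &eqs->qmp, "query-status", NULL);
}

/* First response: ev is now Connected, so it can be reused to chain a
 * second command over the same QMP connection. */
static void example_first_reply(libxl__egc *egc, libxl__ev_qmp *ev,
                                const libxl__json_object *response, int rc)
{
    example_qmp_state *eqs = CONTAINER_OF(ev, *eqs, qmp);
    STATE_AO_GC(eqs->ao);                   /* obtain a gc from the caller's ao */

    if (rc) goto out;                       /* on error, response == NULL */

    eqs->qmp.callback = example_second_reply;
    rc = libxl__ev_qmp_send(gc, &eqs->qmp, "query-version", NULL);
    if (!rc) return;                        /* Active again, wait for the reply */

out:
    /* Connected/Active/Idle -> Idle; do not keep the connection around */
    libxl__ev_qmp_dispose(gc, &eqs->qmp);
}

/* Second response: done with the connection, dispose as soon as possible. */
static void example_second_reply(libxl__egc *egc, libxl__ev_qmp *ev,
                                 const libxl__json_object *response, int rc)
{
    example_qmp_state *eqs = CONTAINER_OF(ev, *eqs, qmp);
    STATE_AO_GC(eqs->ao);

    libxl__ev_qmp_dispose(gc, &eqs->qmp);
}

The point of the design shows up in example_first_reply: because the callback
leaves the ev Connected, the second _send reuses the same QMP connection, and
_dispose is called as soon as the connection is no longer needed.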