
Re: [Xen-devel] [PATCH v4 22/32] libxl_qmp: Handle messages from QEMU



On Mon, Aug 06, 2018 at 01:25:22PM +0200, Roger Pau Monné wrote:
> On Fri, Aug 03, 2018 at 06:25:00PM +0100, Anthony PERARD wrote:
> > On Thu, Aug 02, 2018 at 01:17:37PM +0200, Roger Pau Monné wrote:
> > > On Fri, Jul 27, 2018 at 03:06:04PM +0100, Anthony PERARD wrote:
> > > > diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> > > > index 4a385801ba..e104fea941 100644
> > > > --- a/tools/libxl/libxl_types.idl
> > > > +++ b/tools/libxl/libxl_types.idl
> > > > @@ -69,6 +69,10 @@ libxl_error = Enumeration("error", [
> > > >      (-23, "NOTFOUND"),
> > > >      (-24, "DOMAIN_DESTROYED"), # Target domain ceased to exist during op
> > > >      (-25, "FEATURE_REMOVED"), # For functionality that has been removed
> > > > +    (-26, "QMP_GENERIC_ERROR"), # unspecified qmp error
> > > > +    (-27, "QMP_COMMAND_NOT_FOUND"), # the requested command has not been found
> > > > +    (-28, "QMP_DEVICE_NOT_ACTIVE"), # a device has failed to be become active
> > > > +    (-29, "QMP_DEVICE_NOT_FOUND"), # the requested device has not been found
> > > 
> > > Do we really need such granularity for QMP errors? Isn't it enough to
> > > have a single ERROR_QMP or similar?
> > 
> > No, I don't think so. QMP_COMMAND_NOT_FOUND can be useful when a qmp
> > command is removed (there is already one that is deprecated and that
> > we use).
> > 
> > The last two could be useful to users of libxl as they could provide
> > better error messages. xl doesn't care because whatever error message
> > is attached to the error, it will be printed.
> 
> I wonder if this set of errors will change very often, at the end of
> day it's not controlled by ourselves but QEMU, and then how much
> chasing will libxl have to do in order to keep up with new types of
> QMP errors.

libxl doesn't have to know all the possible error codes. Whenever
there's a code that isn't known, I simply return ERROR_FAIL to the
caller, so it doesn't matter how often new error codes are added.
(Also, I've already left out one error code which is KVM-specific.)
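
For illustration, the mapping can stay as simple as a string compare on
the error class in the QMP reply, with anything unknown collapsing to
ERROR_FAIL. A rough sketch only (the helper name and shape here are
mine, not necessarily what the patch does):

    /* Sketch: map the "class" field of a QMP error reply to a libxl
     * error code.  Any class we don't know about becomes ERROR_FAIL,
     * so new QEMU error classes never break libxl. */
    static int qmp_error_class_to_libxl_error_code(const char *eclass)
    {
        if (!strcmp(eclass, "GenericError"))    return ERROR_QMP_GENERIC_ERROR;
        if (!strcmp(eclass, "CommandNotFound")) return ERROR_QMP_COMMAND_NOT_FOUND;
        if (!strcmp(eclass, "DeviceNotActive")) return ERROR_QMP_DEVICE_NOT_ACTIVE;
        if (!strcmp(eclass, "DeviceNotFound"))  return ERROR_QMP_DEVICE_NOT_FOUND;
        /* e.g. "KVMMissingCap", or whatever class QEMU adds later */
        return ERROR_FAIL;
    }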

> IMO I don't see a lot of value in returning specific QMP error codes
> from libxl, and I would even consider whether it's worth to have
> something like ERROR_QMP, because I don't think there's much the
> caller can do in case of QMP failure. It's an internal libxl QMP
> error, and it's not like the caller is going to start it's own QMP
> connection in order to speak with QEMU directly in order to solve
> whatever QMP issue it got from libxl.

Those error codes aren't an "internal libxl QMP error"; the QMP
connection is working perfectly fine. All those errors come from QEMU.

All those error codes are part of the possible responses to a command.
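
For example, when a command fails QEMU sends back a reply roughly like
this on the QMP socket (device name made up for illustration):

    { "error": { "class": "DeviceNotFound",
                 "desc": "Device 'ide-3' has not been found" } }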

Here are the possible ways a caller can resolve an issue for each
error code:

GenericError:
  The command failed; there's a description that a human can read. Not
  much else can be done.

CommandNotFound:
  Well, the QMP command that the caller is trying to use is unknown to
  the current QEMU; it's maybe a command introduced in a newer version
  of QEMU, or a deprecated command that has now been removed.
  Maybe the caller tried a newer command that replaces a deprecated
  one; on this error, it could fall back to the older version. And vice
  versa. (See the sketch after this list.)

DeviceNotActive:
  The device that libxl is trying to add via the command wasn't brought
  up properly; the caller could try again with different parameters, or
  investigate with "query-*" commands to find out what's missing.

DeviceNotFound:
  The device that the caller is trying to modify (like a cdrom drive) is
  not found in QEMU; the caller could try to add it first, or let the
  user know that the device doesn't exist.
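
As a sketch of the CommandNotFound fallback mentioned above (the
qmp_run_command() helper and the query-cpus/query-cpus-fast pair are
only an example of the pattern, not the actual libxl code):

    /* Hypothetical helper names; only the error-handling pattern
     * matters here. */
    static int query_cpus_compat(libxl__gc *gc, uint32_t domid,
                                 libxl__json_object **result)
    {
        /* Prefer the newer command... */
        int rc = qmp_run_command(gc, domid, "query-cpus-fast", result);

        /* ...and fall back to the deprecated one if this QEMU doesn't
         * know about the replacement yet. */
        if (rc == ERROR_QMP_COMMAND_NOT_FOUND)
            rc = qmp_run_command(gc, domid, "query-cpus", result);

        return rc;
    }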


None of those error codes are QMP issues.
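
And to illustrate the last two, here is roughly what a toolstack built
on libxl could do with DeviceNotFound when changing the media of a
virtual cdrom drive. (Whether libxl_cdrom_insert() ends up propagating
this exact code depends on the rest of the series; I'm assuming the
constant generated from the IDL hunk above is named
ERROR_QMP_DEVICE_NOT_FOUND.)

    #include <stdio.h>
    #include <libxl.h>

    /* Sketch: ctx and disk are set up as for any other libxl call. */
    static void change_cd(libxl_ctx *ctx, uint32_t domid,
                          libxl_device_disk *disk)
    {
        int r = libxl_cdrom_insert(ctx, domid, disk, NULL);

        if (r == ERROR_QMP_DEVICE_NOT_FOUND)
            fprintf(stderr, "domain %u has no cdrom drive to change\n",
                    (unsigned)domid);
        else if (r)
            fprintf(stderr, "changing the cdrom media failed (rc=%d)\n", r);
    }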

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

