
Re: [Xen-API] Xen-API C Bindings, version 0.4.1

On Wed, Aug 09, 2006 at 05:15:27AM +0100, Daniel P. Berrange wrote:

> On Tue, Aug 08, 2006 at 09:50:14PM +0100, Ewan Mellor wrote:
> > On Tue, Aug 08, 2006 at 08:11:07PM +0100, Daniel P. Berrange wrote:
> > > On Tue, Aug 08, 2006 at 05:00:23PM +0100, Ewan Mellor wrote:
> > > > I've added a record type to each of the classes, such as xen_vm_record,
> > > > which will be used for client-side storage of data fields.  I've added
> > > > getters for these records, such as xen_vm_get_record, which return one of
> > > > these.
> > > 
> > > The design of the API feels a little odd to me - there are now basically
> > > two ways of getting at every piece of information: either fetch a record
> > > and then access the struct members, or call each xen_vm_get_XXX method as
> > > required. The latter methods in their current implementation result in an
> > > XML-RPC round trip on every call, which seems rather undesirable. Given
> > > this performance overhead, under what circumstances would you anticipate
> > > wanting to use the xen_vm_get_XXX methods at all - accessing struct
> > > members is just as easy and has less overhead?
> > 
> > I can certainly imagine you wanting to access some of the fields --
> > Vm.memory_actual, for example, would let you cheaply monitor changes in a
> > VM's memory consumption.  Of course, not every field is going to be useful
> > as an individual get call, but it's not clear precisely which ones are, and
> > they aren't expensive to support, so I thought that we may as well throw
> > them all in, for simplicity's sake.
> What I guess I was hinting at is that if getting a struct record for the VM
> object is the preferred general access method, why have the complexity of
> separate handle / struct objects? The 'xen_vm' handle could just be an
> opaque pointer to the xen_vm_record struct in the first place. The
> xen_vm_get_XXX methods could then simply be wrappers doing local access to
> the struct members instead of doing RPC calls every time. It would make it
> clearer to app programmers how to use the API - they wouldn't have to worry
> about 'should I use the xen_vm handle & getters' vs 'should I use
> xen_vm_record', because they'd be one and the same.

The intention is that these _would_ be different things -- accessing a struct
accesses a potentially stale copy of the data, whereas hitting the server is
always up-to-date of course.  It's up to the application to decide between
these options, given the cost of the latter.

> > > Following on from this, can you clarify how apps are supposed to approach
> > > error handling? The xen_session struct has an 'error_description' array
> > > of strings, but it's unclear to me when one is supposed to check this.
> > > All the setter methods have a 'void' return type, and there is no
> > > explicit return code indicating errors for the getters (although you
> > > could check for NULL in some of them; others returning an enum aren't so
> > > clear), so how do you tell whether a method succeeded or failed?
> > 
> > The intention is that you check xen_session->ok before you use any of the
> > values returned by the library in your application, or before you make any
> > state-modifying calls on the server.  You can chain together a few calls
> > into the library if necessary, as once ok is set to false, subsequent
> > calls won't actually take place.  In many cases, this will degenerate to
> > checking xen_session->ok after each call into the library.
> This sounds like a very bad idea to me, because the error handling is
> totally non-obvious to programmers using the API. Even if we document that
> you should check 'xen_session->ok' after making any API call, the reality
> is that people just don't read docs. They will look at the header file, see
> that a method signature returns 'void', and assume that no errors can
> occur. It will actively encourage bad practice / sloppy error handling
> among users of this API.
> It's basically akin to removing all the return codes from libc functions
> and saying you must check errno every time to see if something went wrong.
> It's just not going to happen. You will also lose the benefit of compiler
> warnings: the compiler can warn if you forget to check a return value, but
> it can't say anything about forgetting to check a completely unrelated
> struct member like 'xen_session->ok'.
> Given a choice between an API:
>   int memory;
>   memory = xen_vm_get_memory(vm);
>   if (!session->ok)
>      ...do error stuff...
>   xen_vm_set_vcpus(vm, 3);
>   if (!session->ok)
>      ...do error stuff...
> Or:
>   int memory;
>   if (!(memory = xen_vm_get_memory(vm)))
>      ...do error stuff...
>   if (!xen_vm_set_vcpus(vm, 3))
>      ...do error stuff...
> The latter is a pretty obvious error handling approach in C, while the
> former is just plain odd - you're checking a struct which isn't even the
> same as the object you just operated on. By all means keep the full error
> code / details in the xen_session struct, but the methods really need to
> provide a boolean success / fail return value. Looking at the APIs thus
> far, this shouldn't be all that difficult - those returning a pointer can
> return NULL for failure, while those returning an int can return '0' or
> '-1' as appropriate for their semantics.

It's not that odd -- it's an approach I've seen used in a number of libraries
before -- xmlrpc-c most recently, but also with CORBA bindings that I've used
in the past.  This approach has two advantages: you can chain calls together
without having to clutter the code with error checking, and when you return a
value, you don't need to decide which value is to be your error code.

We already have to pass in a session object -- to give the session ID and
receive back the error message -- so it makes sense to me to use this for the
success/failure flag too.  This way, you can guarantee that chained calls
following a failure don't do anything, because the binding can inspect the
session object.

> > On that subject, the intention with the error_description array is that
> > the first element is a code, and any subsequent elements are the
> > parameters to the error message.  You would look up the code in your
> > internationalised message database, and then substitute the parameters in
> > appropriately.  We still need to define those codes and the parameters
> > that match -- I've just done this on an ad-hoc basis in the library at
> > the moment.
> If we're going to define a formal error code as the first element, could we
> go one step further and actually make it an integer constant / enum
> element? This would simplify checking by the client code, allowing the use
> of integer comparisons, switch statements, etc. for error handling, rather
> than requiring the use of strcmp.

Yes, we certainly should.  I'll put that on the list.

> > > Finally, on the much discussed topic of resource utilization stats. From 
> > > previous discussions I was under the impression the APIs for fetching
> > > these stats were going to be separated out from the APIs for fetching info
> > > about an object's configuration / hardware properties. The xen_vm_record
> > > xen_vbd_record and xen_vif_record structs all still contain the members
> > > for utilization stats.  What is the plan for providing APIs to efficiently
> > > retrieve these stats in a single call ? eg a xen_vm_stats() or some such
> > > API call.
> > 
> > Yes, something like a xen_vm_stats() call is what I had in mind, though
> > there might be more than one, depending upon the kind of monitoring that
> > you want to do.  Any ideas on that front?
> Well, run state (blocked, running, crashed, idle, etc.), domain CPU time,
> memory allocation, # of virtual CPUs. Perhaps a per-CPU breakdown of run
> state, time & host CPU scheduled.
> Might want to have some flags to the xen_vm_stats() method to indicate
> whether to also pull out disk & network device IO stats at the same time.

Cool, thanks.

