
Re: [Xen-devel] [PATCH v3 10/10] libxl: fix caller of libxl_cpupool functions



On Thu, 2015-07-16 at 11:30 +0200, Juergen Gross wrote:
> On 07/16/2015 10:57 AM, Dario Faggioli wrote:
> > [Adding Juergen]
> 
> Near miss. You took my old mail address. :-)
> 
Sorry for this! That's what I have in the MUA... Apparently it never
got updated, since most of the mail I send to you goes via `stg mail'! :-P

> > On Wed, 2015-07-15 at 18:16 +0100, Wei Liu wrote:

> >> I think I need to overhaul cpupool_info a bit if we want to make this
> >> API better.
> >>
> > Well, perhaps having cpupool_info() treat the case where the pool ID
> > is not there specially may help... However, what would you do once
> > you have this additional piece of information available?
> >
> > Maybe, depending on the error, we can clean up the whole array? Is
> > this what we are after?
> 
> I think the best would be:
> 
> Modify cpupool_info() to return either success, internal error, or no
> cpupool found.
> 
> In case of internal error libxl_list_cpupool() should clean up and
> return NULL.
>
Agreed.
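
Something like the following is what I have in mind. It's a completely
standalone sketch, with made up names (pool_rc, pool_info(), list_pools())
and a toy set of pool ids, just to show the idea, not the actual libxl
code: the info function tells "no such pool" apart from a real failure,
and the list function treats the former as the normal end of the
enumeration, while on the latter it frees everything collected so far
and returns NULL.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

enum pool_rc {
    POOL_OK        =  0,   /* cpupool found, info filled in            */
    POOL_NOT_FOUND =  1,   /* no cpupool with an id >= the one queried */
    POOL_ERROR     = -1,   /* internal error (e.g. hypercall failure)  */
};

struct poolinfo {
    uint32_t poolid;
    /* ... */
};

/* Stand-in for cpupool_info(): pretend pools 0, 2 and 5 exist (i.e. the
 * id space is sparse) and report the first pool with id >= poolid. */
static enum pool_rc pool_info(struct poolinfo *info, uint32_t poolid)
{
    static const uint32_t pools[] = { 0, 2, 5 };

    for (size_t j = 0; j < sizeof(pools) / sizeof(pools[0]); j++) {
        if (pools[j] >= poolid) {
            info->poolid = pools[j];
            return POOL_OK;
        }
    }
    return POOL_NOT_FOUND;
}

/* Stand-in for libxl_list_cpupool(): "not found" ends the enumeration
 * normally, while an internal error frees everything collected so far
 * and returns NULL. */
static struct poolinfo *list_pools(int *nb_pool_out)
{
    struct poolinfo *ptr = NULL, *tmp;
    uint32_t poolid = 0;
    int i = 0;

    for (;;) {
        tmp = realloc(ptr, (i + 1) * sizeof(*ptr));
        if (!tmp)
            goto fail;
        ptr = tmp;

        switch (pool_info(&ptr[i], poolid)) {
        case POOL_OK:
            poolid = ptr[i].poolid + 1;  /* ids may be sparse: continue
                                            from the id actually found */
            i++;
            break;
        case POOL_NOT_FOUND:             /* normal end of the list */
            *nb_pool_out = i;
            return ptr;
        case POOL_ERROR:                 /* internal error: clean up */
        default:
            goto fail;
        }
    }

fail:
    free(ptr);
    *nb_pool_out = 0;
    return NULL;
}

int main(void)
{
    int nb_pool;
    struct poolinfo *pools = list_pools(&nb_pool);

    for (int i = 0; i < nb_pool; i++)
        printf("cpupool %u\n", pools[i].poolid);
    free(pools);
    return 0;
}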

> > Sorry, what you mean by 'interleave pool ids'?
> 
> Not sure if this is an answer to the interleaving of pool ids, but it
> is possible to specify the pool id when creating a new cpupool at the
> libxc interface. Even if this is not used, pool ids can easily be
> sparse after a cpupool has been deleted.
> 
Exactly, but that should not be a problem.
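
Just to illustrate the sparse-id point, here is a tiny example. Please
treat the xc_cpupool_create()/xc_cpupool_destroy() signatures below as
an assumption recalled from memory, not as a reference for the real
libxc interface:

#include <stdint.h>
#include <xenctrl.h>

/* Illustration only: how pool ids end up sparse. */
static void make_sparse_ids(xc_interface *xch, uint32_t sched_id)
{
    uint32_t id;

    /* Pool-0 always exists; create three more pools with explicit ids. */
    for (id = 1; id <= 3; id++) {
        uint32_t poolid = id;                 /* ask for this exact id */
        if (xc_cpupool_create(xch, &poolid, sched_id))
            return;
    }

    /* Destroying the middle one leaves the ids {0, 1, 3}: a caller that
     * simply iterates poolid 0 .. nr_pools-1 would now miss pool 3,
     * which is why the enumeration has to ask for "the next pool with
     * an id >= poolid" instead. */
    xc_cpupool_destroy(xch, 2);
}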

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

