
Re: [Xen-devel] [PATCH v2 12/16] libxl: get and set soft affinity



On 14/11/13 15:11, Ian Jackson wrote:
Dario Faggioli writes ("[PATCH v2 12/16] libxl: get and set soft affinity"):
which basically means making space for a new cpumap in both
vcpu_info (for getting soft affinity) and build_info (for setting
it), along with providing the get/set functions, and wiring them
to the proper xc calls. Interface is as follows:

  * libxl_{get,set}_vcpuaffinity() deals with hard affinity, as it
    always has happened;
  * libxl_{get,set}_vcpuaffinity_soft() deals with soft affinity.
In practice, doesn't this API plus these warnings mean that a
toolstack which wants to migrate a domain to a new set of vcpus (or,
worse, a new cpupool) will find it difficult to avoid warnings from
libxl ?

Because the toolstack will want to change both the hard and soft
affinities to new maps, perhaps entirely disjoint from the old ones,
but can only do one at a time.  So the system will definitely go
through "silly" states.

This would be solved with an API that permitted setting both sets of
affinities in a single call, even if the underlying libxc and
hypercalls are separate, because libxl would do the check only on the
overall final state.

So perhaps we should have a single function that can change the
cpupool, hard affinity, and soft affinity, all at once ?

I think this is probably a good idea. Would it make sense to basically have libxl_[gs]et_vcpuaffinity2(), which takes a parameter that can specify that the mask is for either hard, soft, or both?


What if the application makes a call to change the cpupool, without
touching the affinities ?  Should the affinities be reset
automatically ?

I think whatever happens for hard affinities right now should carry over.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
