
Re: [Xen-devel] [PATCH v8 10/13] libxl/xl: deprecate the build_info->cpumap field



On Fri, Jun 13, 2014 at 03:10:28PM +0200, Dario Faggioli wrote:
[...]
> meaning we want vcpu 0 to be pinned to pcpu 3,4 and vcpu 1 to
> be pinned to pcpu 2,3,4,5,6. Before this change, in fact, the
> list variant (["xx", "yy"]) supported only single values.
> 
> BEWARE that, although still being there, for backward
> compatibility reasons, the cpumap field in build_info is no
> longer used anywhere in libxl.
> 

This paragraph needs updating: if cpumap stays functional (see my comments below), it is not accurate to say it is no longer used anywhere in libxl.

> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> ---
>  docs/man/xl.cfg.pod.5       |    8 +++---
>  tools/libxl/libxl_create.c  |    6 ----
>  tools/libxl/libxl_dom.c     |    4 +--
>  tools/libxl/libxl_types.idl |    6 ++++
>  tools/libxl/xl_cmdimpl.c    |   61 +++++++++++++++++--------------------------
>  5 files changed, 34 insertions(+), 51 deletions(-)
> 
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index c087cbc..af48622 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -143,11 +143,11 @@ Combining this with "all" is also possible, meaning "all,^nodes:1"
>  results in all the vcpus of the guest running on all the cpus on the
>  host, except for the cpus belonging to the host NUMA node 1.
>  
> -=item ["2", "3"] (or [2, 3])
> +=item ["2", "3-8,^5"]
>  

What happens if I still have [2, 3] in my config? From the lexer's PoV, 2 and "2" are different things.

> -To ask for specific vcpu mapping. That means (in this example), vcpu #0
> -of the guest will run on cpu #2 of the host and vcpu #1 of the guest will
> -run on cpu #3 of the host.
> +To ask for specific vcpu mapping. That means (in this example), vcpu 0
> +of the guest will run on cpu 2 of the host and vcpu 1 of the guest will
> +run on cpus 3,4,6,7,8 of the host.
>  
>  =back
>  
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index d015cf4..443fe7d 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -187,12 +187,6 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>      } else if (b_info->avail_vcpus.size > HVM_MAX_VCPUS)
>          return ERROR_FAIL;
>  
> -    if (!b_info->cpumap.size) {
> -        if (libxl_cpu_bitmap_alloc(CTX, &b_info->cpumap, 0))
> -            return ERROR_FAIL;
> -        libxl_bitmap_set_any(&b_info->cpumap);
> -    }
> -

As you retain the libxl_set_vcpuaffinity_all() call on b_info->cpumap, you might also
want to retain this hunk. I believe you will figure this out. :-)
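
In other words, something like the hunk being removed above would stay in
libxl__domain_build_info_setdefault() (just a sketch, restating the quoted
code with the rationale spelled out):

    /* Keep defaulting cpumap to "all pcpus", so that the retained
     * libxl_set_vcpuaffinity_all(..., &info->cpumap, NULL) call in
     * libxl__build_pre() is a no-op when no affinity is configured. */
    if (!b_info->cpumap.size) {
        if (libxl_cpu_bitmap_alloc(CTX, &b_info->cpumap, 0))
            return ERROR_FAIL;
        libxl_bitmap_set_any(&b_info->cpumap);
    }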

>      libxl_defbool_setdefault(&b_info->numa_placement, true);
>  
>      if (!b_info->nodemap.size) {
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 1767659..0b00470 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -250,7 +250,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>       * whatever that turns out to be.
>       */
>      if (libxl_defbool_val(info->numa_placement)) {
> -        if (!libxl_bitmap_is_full(&info->cpumap)) {
> +        if (d_config->b_info.num_vcpu_hard_affinity) {
>              LOG(ERROR, "Can run NUMA placement only if no vcpu "
>                         "affinity is specified");
>              return ERROR_INVAL;
> @@ -261,8 +261,6 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>              return rc;
>      }
>      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> -    libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
> -                               &info->cpumap, NULL);
>  

This hunk, as I said in my other email, should stay.

>      /* If we have the vcpu hard affinity list, honour it */
>      if (d_config->b_info.num_vcpu_hard_affinity) {
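
Roughly, I would expect the end result in libxl__build_pre() to look like the
sketch below (assuming libxl_set_vcpuaffinity() takes the same hard/soft
bitmap pair as the _all variant, as elsewhere in this series), so that cpumap
keeps working and the vcpu_hard_affinity array takes precedence when both are
set:

    /* cpumap (defaulted to "all pcpus" if unset) applies to every vcpu... */
    libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
                               &info->cpumap, NULL);

    /* ...and the per-vcpu hard affinity list, if present, overrides it. */
    if (d_config->b_info.num_vcpu_hard_affinity) {
        int i;

        for (i = 0; i < d_config->b_info.num_vcpu_hard_affinity; i++)
            libxl_set_vcpuaffinity(ctx, domid, i,
                                   &d_config->b_info.vcpu_hard_affinity[i],
                                   NULL);
    }
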
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 05978d7..cd5c0d4 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -297,7 +297,11 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
>  libxl_domain_build_info = Struct("domain_build_info",[
>      ("max_vcpus",       integer),
>      ("avail_vcpus",     libxl_bitmap),
> -    ("cpumap",          libxl_bitmap),
> +    ("cpumap",          libxl_bitmap), # DEPRECATED!
> +    # The cpumap field above has been deprecated by the introduction of the
> +    # vcpu_hard_affinity array. It is no longer used anywhere in libxl code,
> +    # so one better avoid setting and, in general, using it at all. To do so,
> +    # is indeed harmless, but won't produce any actual effect on the domain.

This comment needs updating: cpumap is still functional, but vcpu_hard_affinity
takes precedence.

(I skip the parser part as it looks mostly like code motion to me)

Wei.
