
Re: [Xen-devel] [PATCH v3 3/7] sysctl: Make topologyinfo and numainfo sysctls a little more efficient



On Mon, Feb 09, 2015 at 03:04:31PM -0500, Boris Ostrovsky wrote:
> Currently both of these sysctls make a copy to userspace for each index of
> various query arrays. We should try to copy whole arrays instead.
> 
> This requires some changes in sysctl's public data structure, thus bump
> interface version.
> 
> Report topology for all present (not just online) cpus.
> 
> Rename xen_sysctl_topologyinfo and XEN_SYSCTL_topologyinfo to reflect the fact
> that these are used for CPU topology. Subsequent patch will add support for
> PCI topology sysctl.
> 
> Clarify some comments in sysctl.h.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> ---
>  tools/libxc/include/xenctrl.h     |    4 +-
>  tools/libxc/xc_misc.c             |   10 ++--
>  tools/libxl/libxl.c               |   71 +++++++------------
>  tools/misc/xenpm.c                |   69 +++++++-----------
>  tools/python/xen/lowlevel/xc/xc.c |   77 ++++++++------------

Are these mostly mechanical changes? I'm assuming yes.

>  xen/common/sysctl.c               |  141 ++++++++++++++++++++++---------------
>  xen/include/public/sysctl.h       |   75 ++++++++++++--------
>  7 files changed, 221 insertions(+), 226 deletions(-)
[...]
> -        (ctx->xch, node_dists, sizeof(*node_dists) * max_nodes * max_nodes);
> -    if ((memsize == NULL) || (memfree == NULL) || (node_dists == NULL)) {
> +    meminfo = xc_hypercall_buffer_alloc(ctx->xch, meminfo, sizeof(*meminfo) * max_nodes);
> +    distance = xc_hypercall_buffer_alloc(ctx->xch, distance, sizeof(*distance) * max_nodes * max_nodes);

Please wrap these two lines to <80 columns.

> +    if ((meminfo == NULL) || (distance == NULL)) {
>          LIBXL__LOG_ERRNOVAL(ctx, XTL_ERROR, ENOMEM,
>                              "Unable to allocate hypercall arguments");
[...]
> -    set_xen_guest_handle(tinfo.cpu_to_core, coremap);
> -    set_xen_guest_handle(tinfo.cpu_to_socket, socketmap);
> -    set_xen_guest_handle(tinfo.cpu_to_node, nodemap);
> +    cputopo = xc_hypercall_buffer_alloc(self->xc_handle, cputopo, sizeof(*cputopo) * (MAX_CPU_INDEX+1));

Line too long.

> +    if ( cputopo == NULL )
> +     goto out;
> +    set_xen_guest_handle(tinfo.cputopo, cputopo);
>      tinfo.max_cpu_index = MAX_CPU_INDEX;
[...]
> -        goto out;
> -    node_memfree = xc_hypercall_buffer_alloc(self->xc_handle, node_memfree, sizeof(*node_memfree)*(MAX_NODE_INDEX+1));
> -    if ( node_memfree == NULL )
> +    meminfo = xc_hypercall_buffer_alloc(self->xc_handle, meminfo, sizeof(*meminfo) * (MAX_NODE_INDEX+1));

Ditto.

> +    if ( meminfo == NULL )
>          goto out;
> -    nodes_dist = xc_hypercall_buffer_alloc(self->xc_handle, nodes_dist, sizeof(*nodes_dist)*(MAX_NODE_INDEX+1)*(MAX_NODE_INDEX+1));
> -    if ( nodes_dist == NULL )
> +    distance = xc_hypercall_buffer_alloc(self->xc_handle, distance, sizeof(*distance)*(MAX_NODE_INDEX+1)*(MAX_NODE_INDEX+1));

Ditto.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
