Re: [Xen-devel] [PATCH] xen: credit2: enable per cpu runqueue creation
On Tue, 2017-04-11 at 21:45 +0530, Praveen Kumar wrote:
> The patch introduces a new command line option 'cpu' that, when used,
> creates a runqueue per logical pCPU. This may be useful for small
> systems, and also for development, performance evaluation and
> comparison.
>
> Signed-off-by: Praveen Kumar <kpraveen.lkml@xxxxxxxxx>
> Reviewed-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
>
Hey George,
I don't see this patch in staging, nor do I think you've commented on it.
IIRC, it was sent very close to feature freeze... So, is it possible
that it fell through the cracks? :-)
Any thoughts about it? If not, what about applying? :-D
Thanks and Regards,
Dario
> ---
> docs/misc/xen-command-line.markdown | 3 ++-
> xen/common/sched_credit2.c | 15 +++++++++++----
> 2 files changed, 13 insertions(+), 5 deletions(-)
>
> diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
> index 5815d87dab..6e73766574 100644
> --- a/docs/misc/xen-command-line.markdown
> +++ b/docs/misc/xen-command-line.markdown
> @@ -525,7 +525,7 @@ also slow in responding to load changes.
> The default value of `1 sec` is rather long.
>
> ### credit2\_runqueue
> -> `= core | socket | node | all`
> +> `= cpu | core | socket | node | all`
>
> > Default: `socket`
>
> @@ -536,6 +536,7 @@ balancing (for instance, it will deal better with hyperthreading),
> but also more overhead.
>
> Available alternatives, with their meaning, are:
> +* `cpu`: one runqueue per each logical pCPU of the host;
> * `core`: one runqueue per each physical core of the host;
> * `socket`: one runqueue per each physical socket (which often,
>   but not always, matches a NUMA node) of the host;
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index bb1c657e76..ee7b443f9e 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -301,6 +301,9 @@ integer_param("credit2_balance_over", opt_overload_balance_tolerance);
>  * want that to happen basing on topology. At the moment, it is possible
>  * to choose to arrange runqueues to be:
>  *
> + * - per-cpu: meaning that there will be one runqueue per logical
> + *            cpu. This will happen if the opt_runqueue parameter is
> + *            set to 'cpu';
> + *
>  * - per-core: meaning that there will be one runqueue per each physical
>  *             core of the host. This will happen if the opt_runqueue
>  *             parameter is set to 'core';
> @@ -322,11 +325,13 @@ integer_param("credit2_balance_over", opt_overload_balance_tolerance);
>  * either the same physical core, the same physical socket, the same NUMA
>  * node, or just all of them, will be put together to form runqueues.
>  */
> -#define OPT_RUNQUEUE_CORE 0
> -#define OPT_RUNQUEUE_SOCKET 1
> -#define OPT_RUNQUEUE_NODE 2
> -#define OPT_RUNQUEUE_ALL 3
> +#define OPT_RUNQUEUE_CPU 0
> +#define OPT_RUNQUEUE_CORE 1
> +#define OPT_RUNQUEUE_SOCKET 2
> +#define OPT_RUNQUEUE_NODE 3
> +#define OPT_RUNQUEUE_ALL 4
> static const char *const opt_runqueue_str[] = {
> + [OPT_RUNQUEUE_CPU] = "cpu",
> [OPT_RUNQUEUE_CORE] = "core",
> [OPT_RUNQUEUE_SOCKET] = "socket",
> [OPT_RUNQUEUE_NODE] = "node",
> @@ -682,6 +687,8 @@ cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
>         BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID ||
>                cpu_to_socket(peer_cpu) == XEN_INVALID_SOCKET_ID);
>
> +        if ( opt_runqueue == OPT_RUNQUEUE_CPU )
> +            continue;
>         if ( opt_runqueue == OPT_RUNQUEUE_ALL ||
>              (opt_runqueue == OPT_RUNQUEUE_CORE && same_core(peer_cpu, cpu)) ||
>              (opt_runqueue == OPT_RUNQUEUE_SOCKET && same_socket(peer_cpu, cpu)) ||
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)