
Re: [Xen-devel] [PATCH v2 for Xen 4.6 3/4] libxl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler



If there is no more feedback, let me summarize the design for the next version.

For "get" operations, we will implement the following features:

1) Use "xl sched-rtds -v all" to output the per-domain parameters of
all domains, and use, e.g., "xl sched-rtds -d vm1 -v all" to output
the per-domain parameters of one specific domain. When a domain (say
vm1) has vcpus with different scheduling parameters and the user runs
"xl sched-rtds -d vm1 -v all" to show the per-domain parameters, the
output is just the parameters of the vcpu with ID 0 (which is not very
meaningful, and should be made clear to the users).

These two kinds of "get" operations would be implemented through
libxl_domain_sched_params_get() and the other existing domain-level
functions (none of which need to change).
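
For completeness, here is a minimal sketch (not part of the series) of
how a caller drives the existing domain-wide getter from C; on RTDS the
values it returns would just be those of vcpu 0, as noted above:

/*
 * Minimal sketch of the existing domain-wide "get" path.  On RTDS, as
 * discussed above, the returned values are just those of vcpu 0.
 */
#include <stdio.h>
#include <libxl.h>

static int show_dom_params(libxl_ctx *ctx, uint32_t domid)
{
    libxl_domain_sched_params params;
    int rc;

    libxl_domain_sched_params_init(&params);
    rc = libxl_domain_sched_params_get(ctx, domid, &params);
    if (!rc)
        printf("sched=%d period=%d budget=%d\n",
               params.sched, params.period, params.budget);
    libxl_domain_sched_params_dispose(&params);
    return rc;
}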

2) For example, use "xl sched-rtds -d vm1 -v 0 -v 2 -v 4" to show
the per-vcpu parameters of vcpus 0, 2 and 4 of vm1.

This kind of "get" operation would be implemented through
libxl_vcpu_sched_params_get() and other newly-added vcpu-related
functions.
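
Purely as a sketch, the new per-vcpu getter could then be driven as
below. The prototype of libxl_vcpu_sched_params_get() and its use of
the libxl_sched_params container proposed further down are assumptions,
since that is exactly what is under discussion here:

/*
 * Hypothetical sketch only: the prototype of libxl_vcpu_sched_params_get()
 * and the libxl_sched_params layout are the ones proposed further down in
 * this mail, not a final API.
 */
#include <stdio.h>
#include <libxl.h>

static int show_vcpu_params(libxl_ctx *ctx, uint32_t domid)
{
    libxl_sched_params params;
    int i, rc;

    libxl_sched_params_init(&params);
    rc = libxl_vcpu_sched_params_get(ctx, domid, &params);
    for (i = 0; !rc && i < params.num_vcpus; i++)
        printf("vcpu %d: period=%d budget=%d\n",
               params.vcpus[i].vcpuid,
               params.vcpus[i].period,
               params.vcpus[i].budget);
    libxl_sched_params_dispose(&params);
    return rc;
}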


For "set" operations, no new feature is added, against patch v2.

We need some new data structures to support per-vcpu operations (for
all schedulers, not just RTDS).

1) In libxl, we will introduce:

libxl_vcpu_sched_params = Struct("vcpu_sched_params",[
    ("vcpuid",       integer, { xxx some init val xxx}),
    ("weight",       integer, {'init_val': 'LIBXL_PARAM_WEIGHT_DEFAULT'}),
    ("cap",          integer, {'init_val': 'LIBXL_PARAM_CAP_DEFAULT'}),
    ("period",       integer, {'init_val': 'LIBXL_PARAM_PERIOD_DEFAULT'}),
    ("slice",        integer, {'init_val': 'LIBXL_PARAM_SLICE_DEFAULT'}),
    ("latency",      integer, {'init_val': 'LIBXL_PARAM_LATENCY_DEFAULT'}),
    ("extratime",    integer, {'init_val': 'LIBXL_PARAM_EXTRATIME_DEFAULT'}),
    ("budget",       integer, {'init_val': 'LIBXL_PARAM_BUDGET_DEFAULT'}),
    ])

libxl_sched_params = Struct("sched_params",[
    ("sched",        libxl_scheduler),
    ("vcpus",        Array(libxl_sched_params, "num_vcpus")),
    ])

and use libxl_sched_params to store and transfer the array of vcpus
whose parameters are to be changed or shown.
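
Just to make the intended usage concrete, a caller could fill the
(IDL-generated) C counterparts of these types for a sparse set, e.g.
touching only vcpu 2 and vcpu 4 of an RTDS domain, roughly like this
(the setter's name and prototype are assumptions, not a final API):

/*
 * Hedged sketch: assumes the IDL above generates the usual C structs
 * (a vcpus array plus a num_vcpus counter) and that a per-vcpu setter
 * of this shape exists.  Both are assumptions.
 */
#include <stdlib.h>
#include <libxl.h>

static int set_two_vcpus(libxl_ctx *ctx, uint32_t domid)
{
    libxl_sched_params params;
    int rc;

    libxl_sched_params_init(&params);
    params.sched = LIBXL_SCHEDULER_RTDS;
    params.num_vcpus = 2;                        /* sparse: only 2 entries */
    params.vcpus = calloc(2, sizeof(*params.vcpus));
    if (params.vcpus == NULL)
        return -1;

    params.vcpus[0].vcpuid = 2;
    params.vcpus[0].period = 10000;              /* microseconds */
    params.vcpus[0].budget = 4000;

    params.vcpus[1].vcpuid = 4;
    params.vcpus[1].period = 20000;
    params.vcpus[1].budget = 5000;

    rc = libxl_vcpu_sched_params_set(ctx, domid, &params);
    libxl_sched_params_dispose(&params);         /* frees the vcpus array */
    return rc;
}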

2) In Xen, we will introduce:

struct xen_domctl_scheduler_op {
    uint32_t sched_id;  /* XEN_SCHEDULER_* */
    uint32_t cmd;       /* XEN_DOMCTL_SCHEDOP_* */
    union {
        xen_domctl_schedparam_t d;         /* per-domain parameters */
        struct {
            XEN_GUEST_HANDLE_64(xen_domctl_schedparam_vcpu_t) vcpus;
            uint16_t nr_vcpus;             /* number of entries in 'vcpus' */
        } v;                               /* per-vcpu parameters */
    } u;
};
typedef struct xen_domctl_scheduler_op xen_domctl_scheduler_op_t;
DEFINE_XEN_GUEST_HANDLE(xen_domctl_scheduler_op_t);

and some others (details can be found in
http://www.gossamer-threads.com/lists/xen/devel/380726?do=post_view_threaded
). Because of this new xen_domctl_scheduler_op_t, the credit and
credit2 schedulers also need some changes to their
XEN_DOMCTL_scheduler_op handling.
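
On the hypervisor side, the per-vcpu branch of a scheduler's .adjust
hook would basically have to walk the guest handle, roughly like this
(the layout of xen_domctl_schedparam_vcpu_t is an assumption, and
locking/continuation handling is omitted):

/*
 * Rough, hypothetical sketch of the per-vcpu "set" path in a scheduler's
 * .adjust hook.  The layout of xen_domctl_schedparam_vcpu_t (a vcpuid plus
 * per-scheduler parameters) is an assumption, not the final interface;
 * locking, preemption/continuations and most error handling are omitted.
 */
static int sched_adjust_vcpus(struct domain *d,
                              struct xen_domctl_scheduler_op *op)
{
    xen_domctl_schedparam_vcpu_t local;
    unsigned int i;

    for ( i = 0; i < op->u.v.nr_vcpus; i++ )
    {
        if ( copy_from_guest_offset(&local, op->u.v.vcpus, i, 1) )
            return -EFAULT;

        if ( local.vcpuid >= d->max_vcpus || d->vcpu[local.vcpuid] == NULL )
            return -EINVAL;

        /* Look up the scheduler-private data of d->vcpu[local.vcpuid] and
         * update its parameters (e.g. RTDS period/budget) from 'local'. */
    }

    return 0;
}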

Please correct me if something is wrong.

Thanks,
Chong

On Tue, Jun 9, 2015 at 11:18 AM, Dario Faggioli
<dario.faggioli@xxxxxxxxxx> wrote:
> On Mon, 2015-06-08 at 15:55 -0500, Chong Li wrote:
>> On Mon, Jun 8, 2015 at 10:56 AM, Dario Faggioli
>
>> > So, Thoughts? What do you think the best way forward could be?
>>
>> I like option 2 more. But I think we may also need a 'vcpuid' field in
>> libxl_sched_params.
>>
> For sparse array support, yes. At which point, I would flip the names as
> well, i.e., something like this:
>
> libxl_vcpu_sched_params = Struct("vcpu_sched_params",[
>     ("vcpuid",       integer, { xxx some init val xxx}),
>     ("weight",       integer, {'init_val': 'LIBXL_PARAM_WEIGHT_DEFAULT'}),
>     ("cap",          integer, {'init_val': 'LIBXL_PARAM_CAP_DEFAULT'}),
>     ("period",       integer, {'init_val': 'LIBXL_PARAM_PERIOD_DEFAULT'}),
>     ("slice",        integer, {'init_val': 'LIBXL_PARAM_SLICE_DEFAULT'}),
>     ("latency",      integer, {'init_val': 'LIBXL_PARAM_LATENCY_DEFAULT'}),
>     ("extratime",    integer, {'init_val': 'LIBXL_PARAM_EXTRATIME_DEFAULT'}),
>     ("budget",       integer, {'init_val': 'LIBXL_PARAM_BUDGET_DEFAULT'}),
>     ])
>
> libxl_sched_params = Struct("sched_params",[
>     ("sched",        libxl_scheduler),
>     ("vcpus",        Array(libxl_sched_params, "num_vcpus")),
>     ])
>
> With the possibility of naming the latter 'libxl_vcpus_sched_params',
> which is more descriptive, but perhaps is too similar to
> libxl_vcpu_sched_params.
>
> Ian, George, what do you think?
>
> While we're here, another thing we would appreciate some feedback on is
> what should happen to libxl_domain_sched_params_get(). This occurred to
> my mind while reviewing patch 4 of this series. Actually, I think we've
> discussed this before, but can't find the reference now.
>
> Anyway, my view is that, for a scheduler that uses per-vcpu parameters,
> libxl_domain_sched_params_set() should set the same parameters for all
> the vcpus.
> When it comes to _get(), however, I'm not sure. To match the _set()
> case, we'd need to return the parameters of all the vcpus, but we can't,
> because the function takes a libxl_domain_sched_params argument, which
> just holds 1 tuple.
>
> Should we just WARN and ask, when on that specific scheduler, to use the
> per-vcpu variant being introduced in this patch
> (libxl_vcpu_sched_params_get())?
>
> This does not look ideal, but without changing the prototype of
> libxl_domain_sched_params_get(), I don't see what else sensible we could
> do... :-/
>
> Should we change it, and do the LIBXL_API_VERSION "trick"?
>
> So, again, thoughts?
>
> Regards,
> Dario
>
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St. Louis
