
Re: [Xen-devel] [PATCH v1 1/4] xen: add real time scheduler rt



Hi George,


2014-09-03 9:40 GMT-04:00 George Dunlap <George.Dunlap@xxxxxxxxxxxxx>:
On Sun, Aug 24, 2014 at 11:58 PM, Meng Xu <mengxu@xxxxxxxxxxxxx> wrote:
> This scheduler follows the pre-emptive Global EDF theory from the real-time field.
> Each VCPU can have a dedicated period and budget.
> While scheduled, a VCPU burns its budget.
> A VCPU has its budget replenished at the beginning of each of its periods;
> The VCPU discards its unused budget at the end of each of its periods.
> If a VCPU runs out of budget in a period, it has to wait until its next period.
> How a VCPU's budget is burned depends on the server mechanism implemented
> for each VCPU.
>
> Server mechanism: a VCPU is implemented as a deferrable server.
> When a VCPU is scheduled to execute on a PCPU, its budget is continuously
> burned.
>
> Priority scheme: Preemptive Global Earliest Deadline First (gEDF).
> At any scheduling point, the VCPU with earliest deadline has highest
> priority.
>
> Queue scheme: A global Runqueue for each CPU pool.
> The Runqueue holds all runnable VCPUs.
> VCPUs in the Runqueue are divided into two parts: those with and those without budget.
> Within each part, VCPUs are sorted by the gEDF priority scheme.
>
> Scheduling quantum: 1 ms;
>
> Note: cpumask and cpupool are supported.
>
> This is still in the development phase.

You should probably take this out now that you've removed the RFC. :-)


Ditched now. Thanks!
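
For reference, the deferrable-server and gEDF behaviour described in the commit message above boils down to roughly the sketch below. This is illustrative only -- the struct and function names are made up and are not the code in this patch:

#include <stdint.h>

typedef int64_t s_time_t;          /* stand-in for Xen's nanosecond time type */

struct rt_vcpu_sketch {
    s_time_t period;               /* replenishment period               */
    s_time_t budget;               /* budget granted per period          */
    s_time_t cur_budget;           /* budget left in the current period  */
    s_time_t cur_deadline;         /* end of the current period          */
};

/* Catch up to the current period and refill the budget; budget left over
 * from earlier periods is simply discarded. */
static void rt_replenish_sketch(struct rt_vcpu_sketch *svc, s_time_t now)
{
    while ( svc->cur_deadline <= now )
    {
        svc->cur_deadline += svc->period;
        svc->cur_budget = svc->budget;
    }
}

/* gEDF priority: the earlier absolute deadline wins. */
static int rt_higher_prio_sketch(const struct rt_vcpu_sketch *a,
                                 const struct rt_vcpu_sketch *b)
{
    return a->cur_deadline < b->cur_deadline;
}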

I'm just doing a first pass, so just a few quick comments to begin with.

Thank you very much for your review! :-)


> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index 55503e0..7d2c6d1 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -69,6 +69,7 @@ static const struct scheduler *schedulers[] = {
>      &sched_credit_def,
>      &sched_credit2_def,
>      &sched_arinc653_def,
> +    &sched_rt_def,
>  };
>
>  static struct scheduler __read_mostly ops;
> @@ -1090,7 +1091,8 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)
>
>      if ( (op->sched_id != DOM2OP(d)->sched_id) ||
>           ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
> -          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
> +          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo) &&
> +          (op->cmd != XEN_DOMCTL_SCHEDOP_getnumvcpus)) )

Why are you introducing this as a schedop? Isn't this information
already exposed in getdomaininfo?

I introduce XEN_DOMCTL_SCHEDOP_getnumvcpus as a schedop because we need to know the number of vcpus a domain has when the tool stack wants to display the parameters of EACH vcpu.

I think the operation you mean in getdomaininfo is XEN_DOMCTL_max_vcpus (in xen/common/domctl.c)? If so, that operation sets the maximum number of vcpus for a domain rather than getting the number of vcpus the domain currently has, so I don't think I can reuse XEN_DOMCTL_max_vcpus in getdomaininfo.

The detailed reason why I need to get the number of vcpus a domain has is as follows:
When the tool stack (command xl sched-rt -d domain) displays the parameters of EACH vcpu, it allocates an array whose size is "sizeof(struct xen_domctl_sched_rt_params) * num_vcpus_of_this_domain" and bounces this array to the hypervisor. After the hypervisor fills in the parameters of each vcpu, the array is bounced back to the tool stack and displayed to users.

In order to know how large this array should be, we need to know the number of vcpus this domain has.
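
A rough sketch of that flow (with hypothetical names -- hypothetical_get_num_vcpus, hypothetical_get_vcpu_params and struct sched_rt_params_sketch are stand-ins, not the real libxc/libxl API) looks like this:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

struct sched_rt_params_sketch {    /* stand-in for xen_domctl_sched_rt_params */
    uint64_t period;
    uint64_t budget;
};

/* Hypothetical wrappers around the two domctl calls. */
int hypothetical_get_num_vcpus(int domid);
int hypothetical_get_vcpu_params(int domid,
                                 struct sched_rt_params_sketch *params,
                                 int num_vcpus);

int show_rt_params_sketch(int domid)
{
    int num_vcpus, i;
    struct sched_rt_params_sketch *params;

    /* 1. Ask the hypervisor how many vcpus the domain has
     *    (this is what XEN_DOMCTL_SCHEDOP_getnumvcpus provides). */
    num_vcpus = hypothetical_get_num_vcpus(domid);
    if ( num_vcpus <= 0 )
        return -1;

    /* 2. Size the bounce buffer from that count. */
    params = calloc(num_vcpus, sizeof(*params));
    if ( !params )
        return -1;

    /* 3. The hypervisor fills in one entry per vcpu; the tool stack
     *    then prints each vcpu's period and budget. */
    if ( hypothetical_get_vcpu_params(domid, params, num_vcpus) == 0 )
        for ( i = 0; i < num_vcpus; i++ )
            printf("vcpu %d: period=%" PRIu64 " budget=%" PRIu64 "\n",
                   i, params[i].period, params[i].budget);

    free(params);
    return 0;
}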

Please let me know if you have any other concerns or questions. :-)

Thank you very much!

Best,

Meng


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

