
Re: [Xen-devel] [PATCH] scheduler rate controller



On Mon, Nov 28, 2011 at 5:31 PM, Lv, Hui <hui.lv@xxxxxxxxx> wrote:
> Sorry for the late. I also met issues to boot up xen according to your patch, 
> which is same as credit_3.patch that I attached.

Thanks Hui; debugging someone else's buggy code is going beyond
expectations. :-)

> So I modified it to the credit_1.patch and credit_2.patch, both of which work 
> well.

Standard patches for OSS development need to be -p1, not -p0; they
need to work if, in the top level of the directory (foo.hg), you type
"patch -p1 < bar.patch".  The easiest way to make one of these patches
is to use "hg diff"; the best way (if you're using Mercurial) is to
use Mercurial queues.
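
For example (the patch names here are just placeholders):

    # A -p1 patch, generated from the top level of the repository:
    cd foo.hg
    hg diff > bar.patch          # applies with "patch -p1 < bar.patch"

    # Or, with Mercurial queues:
    hg qnew ratelimit.patch      # start a new patch
    # ...edit, build, test...
    hg qrefresh                  # fold the working changes into it
    hg export qtip > bar.patch   # emit it in a mailable form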

> 2) credit_2 adopts the constant sched_ratelimit_us value 1000.

Looks fine overall.  One issue with the patch is that it will not only
fail to preempt for a higher-priority vcpu, it will also fail to
preempt for tasklets.  Tasklet work must be done immediately.  Perhaps
we can add "!tasklet_work_scheduled" to the list of conditions for
taking the ratelimit path.
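
Roughly like the sketch below -- I'm paraphrasing the surrounding
code from memory, so treat the exact names as assumptions:

    /* In csched_schedule(): only keep running the current vcpu if
     * there is no pending tasklet work. */
    if ( !tasklet_work_scheduled                /* don't delay tasklets */
         && prv->ratelimit_us                   /* ratelimiting enabled */
         && vcpu_runnable(current)
         && !is_idle_vcpu(current)
         && runtime < MICROSECS(prv->ratelimit_us) )
    {
        snext = scurr;                          /* stick with current vcpu */
        tslice = MICROSECS(prv->ratelimit_us) - runtime;
        goto out;
    }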

Why did you change "MICROSECS" to "MILLISECS" when calculating
timeslice?  In this case, it will set the timeslice to a full second!
Not what we want...
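
For reference, the time macros (in xen/include/xen/time.h, if memory
serves) are defined in nanoseconds:

    #define MILLISECS(_ms) ((s_time_t)((_ms) * 1000000ULL))
    #define MICROSECS(_us) ((s_time_t)((_us) * 1000ULL))

    /* With sched_ratelimit_us == 1000:
     *   MICROSECS(1000) ==     1,000,000 ns == 1ms  (what we want)
     *   MILLISECS(1000) == 1,000,000,000 ns == 1s   (a full second) */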

From a software maintenance perspective, I'm not a fan of early
returns from functions.  I think it's too easy not to notice that
there's a different return path.  In this case, I think I'd prefer
adding a label, and using "goto out;" instead.
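
That is, something along these lines ("ratelimit_hit" and the lock
are contrived placeholders, not your actual code):

    /* Early return -- easy to miss when reading the function: */
    if ( ratelimit_hit )
    {
        spin_unlock(&lock);
        return snext;
    }

    /* Single exit path via a label -- easier to audit: */
    if ( ratelimit_hit )
        goto out;
    /* ... */
 out:
    spin_unlock(&lock);
    return snext;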

> 1) credit_1 adopts "scheduling frequency counting" to decide the value of 
> sched_ratelimit_us, which makes it adaptive.

If you were using Mercurial queues, you could put this patch on top of
the last one, and it would be easier to see the proposed "adaptive"
part of the code. :-)

Hypervisors are very complicated; it's best to keep things as
absolutely simple as possible.  This kind of mechanism is exactly the
sort of thing that makes it very hard to predict what will happen.  I
think unless you can show that it adds a significant benefit, it's
better just to use the min timeslice.

Regarding this particular code, a couple of items, just for feedback:
* All of the ratelimiting code and data structures should be in the
pluggable scheduler, not in common code.
* This code hard-codes '1000' as the value it sets the global
variable to, overriding whatever the user may have entered on the
command-line.
* Furthermore, the global variable is shared by all of the cpus;
that means you may have one cpu enabling it one moment based on its
own conditions, and another cpu disabling it almost immediately
afterwards, based on conditions on that cpu.  If you're testing with
this at the moment, you might as well stop -- you're going to get a
pretty random result.  If you really wanted this to be an adaptive
solution, you'd need to make a per-cpu variable for the per-cpu rate
limiting, and then initialize it from the global variable (which is
the user configuration); there's a sketch of this after the list.
* Finally, this patch doesn't distinguish between schedules that
happen due to a yield, and schedules that happen due to preemption.
The only schedules we have any control over are schedules that happen
due to preemption.  If adaptivity has any value, it should pay
attention to what it can control.
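
Here is the kind of per-cpu arrangement I mean -- a minimal sketch
using Xen's per-cpu machinery, with made-up variable names:

    /* One rate-limit value per cpu, so each cpu can adapt
     * independently without stomping on the others: */
    static DEFINE_PER_CPU(unsigned int, csched_ratelimit_us);

    /* Seed each cpu's value from the user-configured global
     * (sched_ratelimit_us) when the cpu is brought up: */
    this_cpu(csched_ratelimit_us) = sched_ratelimit_us;

    /* The adaptive logic then reads and writes only
     * this_cpu(csched_ratelimit_us), never the global. */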

I've taken your two patches, given them the proper formatting, and made
them into a patch series (the first adding the ratelimiting, the
second adding the adaptive bit); they are attached.  You should be
able to easily pull them into a mercurial patchqueue using "hg qimport
/path/to/patches/*.diff".  In the future, I will not look at any
patches which do not apply using either "patch -p1" or "hg qimport."

Thanks again for all your work on this -- I hope we can get a simple,
effective solution in place soon.

 -George

Attachment: 01-credit-ratelimit.diff
Description: Text document

Attachment: 02-credit-adaptive.diff
Description: Text document

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
