
Re: [Xen-devel] [RFC][PATCH] scheduler: credit scheduler for client virtualization



Thank you for your suggestions.

George Dunlap wrote:
> On Wed, Dec 3, 2008 at 9:16 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
>> Don't hack it into the existing sched_credit.c unless you are really sharing
>> significant amounts of stuff (which it looks like you aren't?).
>> sched_bcredit.c would be a cleaner name if there's no sharing. Is a new
>> scheduler necessary -- could the existing credit scheduler be generalised
>> with your boost mechanism to be suitable for both client and server?
>
> I think we ought to be able to work this out; the functionality
> doesn't sound that different, and as you say, keeping two schedulers
> around is only an invitation to bitrot.

I had thought that a separate scheduler for clients would be needed, because this modification would affect server workloads. In order to minimize the changes, I implemented the bcredit scheduler by wrapping the current credit scheduler and adding only the differences between the original and bcredit. But as a result, almost all of the functions ended up being written anew.

Now, I agree that one scheduler is best.

> The more accurate credit scheduling and vcpu credit "balancing" seem
> like good ideas.  For the other changes, it's probably worth measuring
> on a battery of tests to see what kinds of effects we get, especially
> on network throughput.

I had not thought about running such a battery of tests, nor about the performance impact.

> Nishiguchi-san, (I hope that's right!) as I understood from your
> presentation, you haven't tested this on a server workload, but you
> predict that the "boost" scheduling of 2ms will cause unnecessary
> overhead for server workloads.  Is that correct?

Yes, you are correct. I answered that in the Q&A after my presentation.

> Couldn't we avoid the overhead this way:  If a vcpu has 5 or more
> "boost" credits, we simply set the next-timer to 10ms.  If the vcpu
> yields before then, we subtract the amount of "boost" credits actually
> used.  If not, we subtract 5.  That way we're not interrupting any
> more frequently than we were before.
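
If I understand the suggestion correctly, it would be roughly like the sketch below. This is only my paraphrase in C, not code from my patch; the names and the assumption that one "boost" credit corresponds to 2ms of boosted run time are mine.

/* Rough paraphrase of the suggestion above (illustrative only). */
#include <stdbool.h>
#include <stdio.h>

#define BOOST_CREDIT_MS 2   /* assumed: one "boost" credit == 2ms */

struct bvcpu {
    int boost_credits;      /* remaining "boost" credits */
};

/* Length of the next-timer when the vcpu is scheduled in. */
static int pick_timer_ms(const struct bvcpu *v)
{
    return (v->boost_credits >= 5) ? 10 : BOOST_CREDIT_MS;
}

/* Charge "boost" credits when the vcpu is scheduled out. */
static void charge_boost(struct bvcpu *v, int ran_ms, bool yielded)
{
    if (yielded)
        v->boost_credits -= ran_ms / BOOST_CREDIT_MS; /* what was used */
    else
        v->boost_credits -= 5;  /* ran the whole 10ms slice */
}

int main(void)
{
    struct bvcpu v = { .boost_credits = 6 };

    printf("next timer: %dms\n", pick_timer_ms(&v));  /* 10ms */
    charge_boost(&v, 4, true);                        /* yielded after 4ms */
    printf("credits left: %d\n", v.boost_credits);    /* 6 - 2 = 4 */
    return 0;
}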

I set the next-timer to 2ms for any vcpu that has "boost" credits, because every vcpu with "boost" credits needs to run equally often, at short intervals. If several vcpus have "boost" credits and the next-timer of the running vcpu is set to 10ms, the other vcpus will have to wait for up to 10ms.

At present, I am thinking that if none of the other vcpus have "boost" credits, then we could set the next-timer to 30ms.
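
As a rough sketch of what I mean (illustrative only; the helper below and the 30ms value are not taken from the current patch):

/* Illustrative sketch of the timer-length selection described above. */
#include <stdio.h>

#define BOOST_TSLICE_MS   2   /* short slice while boosted vcpus compete */
#define NORMAL_TSLICE_MS 30   /* ordinary time slice */

/*
 * my_boost_credits:       "boost" credits of the vcpu about to run
 * other_boosted_runnable: how many other runnable vcpus still hold
 *                         "boost" credits (hypothetical input)
 */
static int select_timer_ms(int my_boost_credits, int other_boosted_runnable)
{
    if (my_boost_credits <= 0)
        return NORMAL_TSLICE_MS;

    /* Another boosted vcpu is waiting: keep the slice short so that
     * it is not delayed by more than about 2ms. */
    if (other_boosted_runnable > 0)
        return BOOST_TSLICE_MS;

    /* No other boosted vcpu is waiting: a longer slice costs nothing
     * in latency and avoids extra timer interrupts. */
    return NORMAL_TSLICE_MS;
}

int main(void)
{
    printf("%dms\n", select_timer_ms(3, 1)); /* boosted, others waiting -> 2ms  */
    printf("%dms\n", select_timer_ms(3, 0)); /* boosted, running alone  -> 30ms */
    printf("%dms\n", select_timer_ms(0, 2)); /* no boost credits        -> 30ms */
    return 0;
}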


> Come to think of it: won't the effect of setting the 'boost' time to
> 2ms be basically counteracted by giving domains boost credits?  I
> thought the purpose of reducing the boost time was to allow other domains
> to run more quickly?  But if a domain has more than 5 'boost' credits,
> it will run for a full 10 ms anyway.  Is that not so?

Suppose there are two domains that have been given "boost" credits. One domain runs for 2ms, then the other runs for 2ms, then the first runs for 2ms again, and so on. I think this is necessary so that the waiting time of both domains is the same.
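
For example, a toy simulation with made-up credit counts shows the alternation I have in mind:

/* Toy simulation of two boosted domains alternating in 2ms slices. */
#include <stdio.h>

int main(void)
{
    int credits[2] = { 5, 5 };  /* assumed "boost" credits, 1 credit = 2ms */
    int t = 0, cur = 0;

    while (credits[0] > 0 || credits[1] > 0) {
        if (credits[cur] > 0) {
            printf("t=%2dms: domain %d runs for 2ms\n", t, cur);
            credits[cur]--;
            t += 2;
        }
        cur ^= 1;               /* alternate between the two domains */
    }
    return 0;
}

Each domain waits at most 2ms before it runs again.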

> Could you test your video latency measurement with all the other
> optimizations, but with the "boost" time set to 10ms instead of 2?  If
> it works well, it's probably worth simply merging the bulk of your
> changes in and testing with server workloads.

I tested the video latency measurement with the "boost" time set to 10ms. Unfortunately, it did not work well: as I mentioned above, the vcpu occasionally had to wait for 10ms.

In my patch, the "boost" time is tunable. How about making the default "boost" time 30ms and setting it to a shorter value only when necessary? Would that be acceptable?

In order to make the "boost" time as long as possible, I will think about how to compute the length of the next-timer for a vcpu that has "boost" credits.
I’ll try to revise the patch.
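
Purely as an illustration of the direction I mean (this is not necessarily what the revised patch will do; the latency target and the limits below are made-up numbers):

/* Illustrative only: choose the longest next-timer that still keeps
 * every other boosted vcpu's waiting time within a latency target. */
#include <stdio.h>

#define MIN_TSLICE_MS      2
#define MAX_TSLICE_MS     30
#define LATENCY_TARGET_MS 10   /* assumed acceptable waiting time */

static int boosted_timer_ms(int runnable_boosted_vcpus)
{
    int slice;

    if (runnable_boosted_vcpus <= 1)
        return MAX_TSLICE_MS;   /* no other boosted vcpu is waiting */

    /* Share the latency target among the other boosted vcpus. */
    slice = LATENCY_TARGET_MS / (runnable_boosted_vcpus - 1);

    if (slice < MIN_TSLICE_MS)
        slice = MIN_TSLICE_MS;
    if (slice > MAX_TSLICE_MS)
        slice = MAX_TSLICE_MS;
    return slice;
}

int main(void)
{
    for (int n = 1; n <= 6; n++)
        printf("%d boosted vcpu(s) -> %dms timer\n", n, boosted_timer_ms(n));
    return 0;
}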

And thanks again.

Best regards,
Naoki Nishiguchi


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

