
RE: [Xen-devel] Re: Xen scheduler



 

> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
> Emmanuel Ackaouy
> Sent: 24 April 2007 15:35
> To: pak333@xxxxxxxxxxx
> Cc: ncmike@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] Re: Xen scheduler
> 
> On Apr 23, 2007, at 21:33, pak333@xxxxxxxxxxx wrote:
> > Basically all my questions boil down to this: Does the scheduler
> > know about the pcpu layout (same socket) and does it do any
> > scheduling based on that?
> 
> Yes, but not in the way you suggested.
> 
> The scheduler actually tries to schedule VCPUs across multiple
> sockets before it "co-schedules" a socket. The idea behind this is
> to maximize the achievable memory bandwidth.
> 
> On hyperthreaded systems, the scheduler will also attempt to
> schedule across cores before it co-schedules hyperthreads. This
> is to maximize achievable cycles.
> 
> At this time, no attempt is made to schedule 2 VCPUs of the
> same VM any differently than 2 VCPUs of distinct VMs.
> 
> If you feel two VCPUs would do better co-scheduled on a
> core or socket, you'd currently have to use cpumasks -- as
> Mike suggested -- to manually restrict where they can run. I'd
> be curious to know of real world cases where doing this
> increases performance significantly.
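
As an aside, a minimal sketch of what such a cpumask restriction might 
look like with the xm tools. The domain name "guest1" and the choice of 
pCPUs 2 and 3 are placeholders, and the assumption that those two pCPUs 
share a socket has to be checked against the actual topology first:

    # In the domU config file (which xm parses as Python), restrict all
    # of the domain's VCPUs to two pCPUs assumed to sit on one socket.
    vcpus = 2
    cpus  = "2,3"

    # Or at runtime, pin each VCPU individually and verify the result:
    #   xm vcpu-pin guest1 0 2
    #   xm vcpu-pin guest1 1 3
    #   xm vcpu-list guest1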

If you have data sharing between the apps, a shared L2 or L3 cache on the 
same socket, and an application/data set that fits in that cache, I could 
see that it would help. [And of course, the OS, for example, will have some 
data and code sharing between CPUs - so an application that spends a lot of 
time in the OS itself would benefit from "socket sharing".]

For other applications, maximizing memory bandwidth is most likely the better 
choice.

Of course, for ideal performance, you would also have to take into account 
which CPU owns the memory being used, as the latency of transferring data 
from one CPU to another in a NUMA system can affect performance quite 
noticeably.
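
As a rough aid for figuring out which pCPU numbers belong to which socket, 
something along these lines could be run in dom0. The field names are taken 
from "xm info" output, and the assumption that Xen numbers pCPUs contiguously 
per socket is just that - an assumption to verify against the real hardware 
before pinning anything:

    import subprocess

    # Ask the hypervisor for its physical CPU counts via "xm info".
    out = subprocess.Popen(["xm", "info"],
                           stdout=subprocess.PIPE).communicate()[0]
    info = {}
    for line in out.decode().splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            info[key.strip()] = val.strip()

    nr_cpus = int(info["nr_cpus"])
    per_socket = int(info["cores_per_socket"]) * int(info["threads_per_core"])

    # Print a candidate "cpus" mask per socket, ASSUMING pCPUs 0..n-1 sit
    # on socket 0, the next n on socket 1, and so on.
    for socket in range(nr_cpus // per_socket):
        first = socket * per_socket
        mask = ",".join(str(c) for c in range(first, first + per_socket))
        print('socket %d: cpus = "%s"' % (socket, mask))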

--
Mats
> 
> Hope this helps.
> 
> Cheers,
> Emmanuel.
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

