
RE: [Xen-devel] Re: Xen scheduler



 

> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
> pak333@xxxxxxxxxxx
> Sent: 23 April 2007 20:34
> To: ncmike@xxxxxxxxxx
> Cc: Mike D. Day; xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] Re: Xen scheduler
> 
> Thanks. A little more clarification.
>  
> Here is an example. 
>  
> I have multiple VMs, each with 2 vcpus. There is no user 
> affinity, so I will let the vcpus run wherever the Xen 
> scheduler chooses. My system has 2 dual-core sockets.
>  
>      If all 4 pcpus are idle, will the scheduler assign 
> the vcpus of a VM to pcpus on the same socket? 
>      If, while running, 2 pcpus from different sockets become 
> available, the scheduler will assign 2 vcpus to those two 
> pcpus. Does the scheduler do any optimization, such as moving 
> the vcpus of a VM to the same socket, or does it just assign 
> the vcpus as they become ready?

Xen's current schedulers don't have any clue about CPU cores and their
relationship to sockets, memory locations or any other such things. So
whether you get two VCPUs from the same domain, or from two different
domains, on one of your physical dual-core CPUs is entirely random, and
will remain so every time a VCPU is rescheduled. Whichever PCPU happens
to be ready when the VCPU is scheduled will be used, unless you
specifically restrict the VCPU to a (set of) PCPU(s). 
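
As an aside, if you do want such a restriction, the xm tool can pin a
VCPU to a chosen set of PCPUs at runtime. A rough example - the domain
name and CPU numbers are made up here, and which PCPU numbers share a
socket depends on your particular box:

    # allow VCPU 1 of "guest1" to run only on PCPUs 2 and 3
    xm vcpu-pin guest1 1 2,3

    # show the affinity the scheduler will actually honour
    xm vcpu-list guest1

Once pinned, the scheduler simply never considers PCPUs outside that
mask for that VCPU.
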
>  
> Or if 3 pcpus are idle, will the scheduler assign vcpus from a 
> VM to the same socket?

It will assign "any VCPU to any PCPU that is allowed for that VCPU", and
it doesn't really care which VM or which socket any particular VCPU/PCPU
combination belongs to. 
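
If you want the restriction in place from the moment the domain is
created, the domain config file can carry it too. Something along these
lines in the guest's config (hypothetical values, and the exact syntax
accepted can vary a little between releases, so treat it as a sketch):

    vcpus = 2
    cpus  = "2,3"    # all of this guest's VCPUs limited to PCPUs 2 and 3

Everything outside that list simply isn't "allowed" for those VCPUs in
the sense above.
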
>  
> Basically all my questions boil down to this: does the 
> scheduler know about the pcpu layout (same socket) and does it 
> do any scheduling based on that?

Not at present. There have been some discussions on this, and whilst
it's easy to solve some of the obvious cases, there are also some harder
nuts to crack. What do you do when the system is really busy and there's
no "good" PCPU to schedule a particular VCPU on - do you wait for the
ideal PCPU to become available, or do you schedule it on a less ideal
PCPU? And how long do you allow the wait for that ideal PCPU?

Whilst it's easy to say "just do it right", solving the rather hairy
problems that arise when there's congestion, and making the right
"judgement" call in that situation, is much harder. 

--
Mats
>  
> Thanks
> Prabha
>  
>       -------------- Original message -------------- 
>       From: "Mike D. Day" <ncmike@xxxxxxxxxx> 
>       
>       > On 21/04/07 06:03 +0000, pak333@xxxxxxxxxxx wrote: 
>       > > 
>       > > Hi, 
>       > > 
>       > > 
>       > > 
>       > > On running on a dual/quad core does the Xen scheduler take 
>       > > into account the physical layout of the cores. 
>       > > 
>       > > For example if a VM has two vcpus, and there are 4 physical 
>       > > cpus free, will it take care to assign the 2 vcpus (from a 
>       > > VM) to 2 pcpus on the same socket. 
>       > 
>       > 
>       > The scheduler only knows the affinity of vcpus for physical 
>       > cpus. The affinity is determined by a userspace application 
>       > and can be modified using a domain control hypercall. Look in 
>       > xen/common/domctl.c around line 568 for the following: 
>       > 
>       > case XEN_DOMCTL_setvcpuaffinity: 
>       > case XEN_DOMCTL_getvcpuaffinity: 
>       > 
>       > 
>       > 
>       > When the credit scheduler migrates a vcpu to a pcpu, it only 
>       > considers pcpus for which the affinity bit is set. If the 
>       > userspace application sets the affinity such that only the 
>       > bits for pcpus on the same socket are set, then the vcpu will 
>       > only run on pcpus sharing the same socket. 
>       > 
>       > 
>       > Mike 
>       > 
>       > -- 
>       > Mike D. Day 
>       > IBM LTC 
>       > Cell: 919 412-3900 
>       > Sametime: ncmike@xxxxxxxxxx AIM: ncmikeday Yahoo: ultra.runner 
>       > PGP key: http://www.ncultra.org/ncmike/pubkey.asc 
>       > 
>       > _______________________________________________ 
>       > Xen-devel mailing list 
>       > Xen-devel@xxxxxxxxxxxxxxxxxxx 
>       > http://lists.xensource.com/xen-devel 
> 
> 
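
To make Mike's point above a bit more concrete: the effect of the
affinity bits is essentially a bitmask AND between "where this VCPU is
allowed to run" and "which PCPUs are available". The following is just a
toy, stand-alone C illustration of that idea (plain bitmasks, invented
numbers - it is not the actual credit scheduler code):

#include <stdio.h>
#include <stdint.h>

/* Pick the lowest-numbered PCPU that is both idle and allowed by the
 * VCPU's affinity mask; return -1 if there is no such PCPU right now. */
static int pick_pcpu(uint64_t affinity, uint64_t idle_pcpus)
{
    uint64_t candidates = affinity & idle_pcpus;
    if (!candidates)
        return -1;
    return __builtin_ctzll(candidates);
}

int main(void)
{
    /* VCPU allowed on PCPUs 2 and 3 (say, one socket); PCPUs 0 and 2 idle. */
    uint64_t affinity = (1ull << 2) | (1ull << 3);
    uint64_t idle     = (1ull << 0) | (1ull << 2);
    printf("VCPU placed on PCPU %d\n", pick_pcpu(affinity, idle)); /* prints 2 */
    return 0;
}

If userspace sets the affinity mask so that only one socket's PCPUs are
in it, the intersection can never contain a PCPU from another socket,
which is exactly why pinning gives you the per-socket behaviour that the
scheduler itself doesn't reason about.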



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

