
RE: [Xen-devel] Too many VCPUs makes domU high CPU utilization



Although I still haven't figured out why the VCPUs fall only on either even or odd PCPUs, if I explicitly set "VCPU=[4~15]" in the HVM configuration, the VM will use all PCPUs from 4 to 15.
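 
For reference, here is a minimal sketch of how that kind of pinning is usually expressed in an xm-style guest config; the key names and values below are my assumption, please check them against your toolstack's documentation:
 
    # Hypothetical guest config sketch (assumed key names): give the guest no
    # more VCPUs than the PCPUs it can actually run on, and pin it away from
    # the PCPUs that dom0 is using (0-3).
    vcpus = 12          # number of virtual CPUs exposed to the guest
    cpus  = "4-15"      # pin the guest's VCPUs to physical CPUs 4-15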
Also, I may have found the reason why the guest boots so slowly.
 
I think the reason is that the number of guest VCPUs is greater than the number of physical CPUs the guest can run on.
In my test, the physical host has 16 PCPUs and dom0 takes 4, so for every guest only 12 physical CPUs are available.
 
So, if the guest has 16 VCPUs and only 12 physical CPUs are available, then under heavy load two or more VCPUs will be
queued on one physical CPU, and if one VCPU is waiting for a response from another VCPU (such as an IPI message), the waiting
time can be much longer.
 
Especially, while the guest is running, if a process inside the guest runs 16 threads, then each VCPU may own one
thread, yet on the physical side those VCPUs still queue on the PCPUs. If there is busy-waiting code (such as a spinlock),
it will give the guest high CPU utilization. If the busy-waiting code does not run very often, we might see CPU utilization jump to
very high and drop back to low now and then.
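 
A back-of-the-envelope model of that argument, just a simplified sketch using the numbers from my test rather than a measurement:
 
    # Rough model: with more runnable VCPUs than PCPUs, each VCPU only gets a
    # fraction of real CPU time, so a spinlock holder is off-CPU part of the
    # time while its waiters keep burning cycles (lock-holder preemption).
    n_pcpus = 12   # physical CPUs available to guests (16 total minus 4 for dom0)
    n_vcpus = 16   # virtual CPUs given to the guest

    # Share of wall-clock time each runnable VCPU actually executes when all
    # VCPUs are busy.
    run_share = min(1.0, n_pcpus / n_vcpus)   # 12/16 = 0.75

    # Fraction of time a lock holder is descheduled; during that time every
    # spinning waiter wastes a full PCPU without making progress.
    holder_off_cpu = 1.0 - run_share          # 0.25

    print(f"each VCPU runs ~{run_share:.0%} of the time")
    print(f"a lock holder is preempted ~{holder_off_cpu:.0%} of the time, "
          f"and waiters spin uselessly for roughly that long per critical section")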
 
Could this be the explanation?
 
Many thanks.

 
> From: kevin.tian@xxxxxxxxx
> To: tinnycloud@xxxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxx
> CC: george.dunlap@xxxxxxxxxxxxx
> Date: Fri, 20 May 2011 07:29:55 +0800
> Subject: RE: [Xen-devel] Too many VCPUs makes domU high CPU utilization
>
> >From: MaoXiaoyun [mailto:tinnycloud@xxxxxxxxxxx]
> >Sent: Friday, May 20, 2011 12:24 AM
> >>
> >> does the same thing happen if you launch B/C/D after A?
> >>
> > 
> >From the test results, not really; all domains' CPU utilization is low.
> >It looks like this only happens while domain A is booting, and it results in quite a long boot time.
>
> One possible reason is lock contention at boot time. If the lock holder
> is preempted unexpectedly and the lock waiter continues to consume cycles, the
> boot progress can be slow.
>
> Which kernel version are you using? Does a different kernel version expose the
> same problem?
>
> > 
> >One thing to correct: today, even after I destroyed B/C/D, domain A still consumed 800% CPU for quite a long time,
> >up until now as I am writing this mail.
>
> So this phenomenon is intermittent? In your earlier mail A is back to normal
> after you destroyed B/C/D, and this time the slowness continues.
>
> >Another strange thing is that all VCPUs of domU A seem to be running only on even-numbered physical CPUs (that is, 0, 2, 4...),
> >which explains why the CPU utilization is 800%. But I don't understand whether this is by design.
>
> Are there any hard limitations you added in the configuration file? Do you
> observe this weird affinity all the time, or only occasionally? It looks strange to me
> that the scheduler would consistently keep such an affinity if you don't assign it explicitly.
>
> You may want to run xenoprof to sample domain A and see its hot spots, or
> use xentrace to trace VM-exit events to deduce the cause from another angle.
>
> Thanks
> Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

