
Re: [Xen-devel] Odd CPU Scheduling Behavior

  • To: "Carb, Brian A" <Brian.Carb@xxxxxxxxxx>
  • From: Emmanuel Ackaouy <ackaouy@xxxxxxxxx>
  • Date: Thu, 29 Mar 2007 17:42:04 +0200
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 29 Mar 2007 16:45:47 +0100
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

There is no gang scheduling in Xen, so what you see is not unexpected.
Both VCPUs of the same VM are as likely to run on the same physical
CPU as not. For each VM, though, both of its VCPUs should get equal
CPU time when they are runnable, even if they alternately run on the
same physical CPU.

I have seen some multithreaded applications/libraries back off from
using execution vehicles (processes) to schedule a runnable thread
when it doesn't seem to make forward progress, probably because some
code somewhere assumes another process is hogging the CPU and
concludes it's better to lower the number of execution vehicles. In
this case, multithreaded apps running in a 2-VCPU guest on Xen
sometimes schedule work on only one VCPU when another VM is competing
for the physical CPU resources.

Are both VCPUs of each VM making forward progress during your test?
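One way to check is to compare the cumulative Time(s) column of `xm vcpu-list <domain>` across two snapshots taken a few seconds apart; a VCPU whose time stops increasing is not making forward progress. A minimal sketch follows — the snapshot values below are hypothetical placeholders (fields: Name, ID, VCPU, CPU, State, Time(s); the affinity column is omitted), and on a live host you would capture them from the real command instead:

```shell
#!/bin/sh
# Hypothetical snapshots of `xm vcpu-list vm1`, a few seconds apart.
# On a real host:  snap1=$(xm vcpu-list vm1 | tail -n +2)
snap1="vm1 1 0 2 r-- 100.0
vm1 1 1 3 -b- 100.0"
snap2="vm1 1 0 2 r-- 110.0
vm1 1 1 3 -b- 100.1"

# A VCPU is making forward progress if its Time(s) (column 6) grew.
progress=""
for v in 0 1; do
    t1=$(printf '%s\n' "$snap1" | awk -v v="$v" '$3 == v { print $6 }')
    t2=$(printf '%s\n' "$snap2" | awk -v v="$v" '$3 == v { print $6 }')
    if awk -v a="$t1" -v b="$t2" 'BEGIN { exit !(b+0 > a+0) }'; then
        progress="$progress vcpu$v:yes"
    else
        progress="$progress vcpu$v:no"
    fi
done
echo "$progress"
```

In the hypothetical data above both VCPUs advance; a stalled VCPU would report "no" for its slot.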

On Mar 29, 2007, at 16:58, Carb, Brian A wrote:

We're seeing a CPU scheduling behavior in Xen and we're wondering if anyone can explain it.
We're running Xen 3.0.4 on a Unisys ES7000/one with 8 CPUs (4 dual-core sockets) and 32GB memory. Xen is built on SLES10, and the system is booted with dom0_mem=512mb. We have two paravirtual machines, each booted with 2 VCPUs and 2GB memory, and each running SLES10 and Apache2 with worker multi-processing modules.
The vcpus of dom0, vm1 and vm2 are pinned as follows:
dom0 is relegated to 2 vcpus (xm vcpu-set 0 2) and these are pinned to cpus 0-1
vm1 uses 2 vcpus pinned to cpus 2-3
vm2 uses 2 vcpus pinned to cpus 2-3
CPUs 4 through 7 are left unused.
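For reference, that pinning could be set up with commands along these lines (a sketch in xm syntax as used in Xen 3.0.x; the domain names vm1/vm2 are the guests above, and the exact affinity arguments are assumptions about the poster's setup — these require a live hypervisor to run):

```shell
# Limit dom0 to 2 VCPUs, then pin each domain's VCPUs as described.
xm vcpu-set 0 2
xm vcpu-pin 0 0 0-1      # dom0 VCPUs confined to physical CPUs 0-1
xm vcpu-pin 0 1 0-1
xm vcpu-pin vm1 0 2-3    # vm1 and vm2 share physical CPUs 2-3
xm vcpu-pin vm1 1 2-3
xm vcpu-pin vm2 0 2-3
xm vcpu-pin vm2 1 2-3
```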
Our test runs http_load against the Apache2 web servers in the 2 VMs. Since Apache2 is using worker multi-processing modules, we expect that each VM will spread its load over its 2 VCPUs, and during the test we have verified this using top and sar inside a VM console.
The odd behavior occurs when we monitor CPU usage using xenmon in interactive mode. By pressing "c", we can observe the load on each of the CPUs. When we examine CPUs 2 and 3 initially, each is used equally by vm1 and vm2. However, shortly after we start our testing, CPU 2 runs vm1 exclusively 100% of the time, and CPU 3 runs vm2 100% of the time. When the test completes, CPUs 2 and 3 go back to sharing the load of vm1 and vm2.
Is this the expected behavior?

brian carb
unisys corporation - malvern, pa
_______________________________________________
Xen-devel mailing list


