
Re: [Xen-users] Xen VMs and Unixbench: single vs multiple cpu behaviour

On Wed, 2015-11-25 at 10:54 +0100, Dario Faggioli wrote:
> On Tue, 2015-11-24 at 23:41 +0100, Dario Faggioli wrote:
> The fact that performance improves when doing that makes me think
> that the issue is related to how Xen's scheduler handles the
> specific way in which the Linux scheduler migrates tasks between
> vCPUs during some of the tests above.
> I have heard similar reports, so I'll keep investigating. I've got
> theories, but I'd like to collect a few more data points before
> drawing conclusions... Next step will be tracing some of the tests.
I've got some more numbers and results, which I'll polish and post.

I've found what, inside the guest, is responsible for the scheduling
decisions that lead to this situation (a scheduling domain flag called
SD_BALANCE_FORK), and I'll explain it when I post the numbers.
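One way to test that theory would be clearing the SD_BALANCE_FORK bit in the guest's scheduling domains and re-running the benchmark. A minimal sketch follows; the 0x08 bit value matches 4.x-era kernels but should be verified against your kernel's include/linux/sched.h, and the flags files under /proc were writable in kernels of that era:

```shell
# Hypothetical sketch: clear SD_BALANCE_FORK in a sched_domain flags value.
flags=4143                 # example: a value read from a flags file
new=$(( flags & ~0x08 ))   # drop the SD_BALANCE_FORK bit (4.x-era value)
echo "$new"
# then write it back on the guest, e.g.:
# echo "$new" > /proc/sys/kernel/sched_domain/cpu0/domain0/flags
```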

I'll now go check why the behavior enabled by that flag irritates Xen's
scheduler so much... In the meantime, in addition to this:

> In the meantime, Marko, if you're still up for it, can you try these
> two commands, in your 4 vCPUs VM, and report here the results?
> From the UnixBench directory:
>  # ./Run -c 1 spawn
>  # schedtool -a 1 -e ./Run -c 1 spawn
Can I see the output of (from inside the guest):

# ls /proc/sys/kernel/sched_domain/cpu0/
# ls /proc/sys/kernel/sched_domain/cpu0/domain0
# cat /proc/sys/kernel/sched_domain/cpu0/domain0/name
# cat /proc/sys/kernel/sched_domain/cpu0/domain0/flags
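The flags file prints a single bitmask, so to see whether SD_BALANCE_FORK is among the set bits, something like the following should work (assuming the 0x08 bit value of 4.x-era kernels; double-check it against include/linux/sched.h for your guest's kernel):

```shell
# Sketch: test a sched_domain flags bitmask for SD_BALANCE_FORK (0x08
# on 4.x-era kernels; the bit assignments have changed across versions).
flags=4143   # substitute the value read from the flags file above
if [ $(( flags & 0x08 )) -ne 0 ]; then
    echo "SD_BALANCE_FORK set"
else
    echo "SD_BALANCE_FORK clear"
fi
```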

<<This happens because I choose it to happen!>> (Raistlin Majere)
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


Xen-users mailing list


