
[Xen-users] Pthreads Overhead


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: Bent Masriya <bentmasriya@xxxxxxxxx>
  • Date: Fri, 23 Oct 2009 14:59:55 -0400
  • Delivery-date: Fri, 23 Oct 2009 12:00:50 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hi all,
I have been seeing considerable (2x-3x) overhead when using POSIX threads (pthreads) and pthread mutex synchronization on Xen dom0. I am not sure whether this is inherent Xen overhead or a misconfiguration on my side.

Basically, I run a simple benchmark that times pthread_create and pthread_mutex_lock/unlock across 200 threads, and the overhead I measure on Xen dom0 is 1.5x-2.9x relative to non-xenified (native) performance. Can someone please confirm whether they have seen similar overhead and/or shed some light on its source?
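
For concreteness, here is a minimal sketch of the kind of benchmark I mean (simplified; the thread and iteration counts are illustrative, not my exact harness):

/* pthread_bench.c -- sketch only; counts are illustrative.
 * Build: gcc -O2 -std=gnu99 -o pthread_bench pthread_bench.c -lpthread -lrt
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS 200
#define NITERS   100000

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

/* monotonic wall-clock time in seconds */
static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* each thread repeatedly takes and releases the shared mutex */
static void *worker(void *arg)
{
    for (int i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&mtx);
        pthread_mutex_unlock(&mtx);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    double t0 = now_sec();
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    double t_create = now_sec() - t0;

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    double t_total = now_sec() - t0;

    printf("pthread_create x %d:       %.6f s\n", NTHREADS, t_create);
    printf("create + %d lock/unlock:   %.6f s\n", NITERS, t_total);
    return 0;
}

I run the same binary on dom0 and on the native kernel and compare the two timings.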

I am using the 2.6.26-2-xen-amd64 kernel under Xen 3.2 with the credit scheduler, with eight VCPUs on a machine with 8 physical cores across 2 sockets, and I am comparing against native Linux 2.6.18.8 SMP on the same hardware. I see this overhead for both HVM and paravirtualized Xen. Please advise.
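
In case it helps with diagnosis, I inspect the scheduler and VCPU placement from dom0 with the Xen 3.2 xm toolstack along these lines:

xm info                        # physical topology: nr_cpus, sockets, cores
xm sched-credit -d Domain-0    # credit-scheduler weight/cap for dom0
xm vcpu-list Domain-0          # which physical CPUs each dom0 VCPU runs on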
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

