
Re: [Xen-devel] Poor HVM performance with 8 vcpus



Hi Juergen,

Tim Deegan is the man for this stuff (cc'ed) - you don't want to get too
involved in the shadow code without syncing with him first. My
understanding, however, is that the shadow code is currently designed to
scale only to about 4 VCPUs. The expectation is that users wanting to go
wider than that will typically be upgrading to modern many-core processors
with hardware-assisted paging (Intel EPT, AMD NPT), which bypasses the
shadow code entirely.
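
For what it's worth, hardware-assisted paging is selected per guest. A
minimal HVM config fragment, assuming a HAP-capable box and a recent
enough toolstack -- 'hap' is the standard option, the other values are
just placeholders:

    # HVM guest config fragment (illustrative values)
    builder = "hvm"
    vcpus = 8
    hap = 1    # use EPT/NPT instead of shadow paging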

If you don't fit into that scenario, perhaps we can find you some
lowish-hanging fruit to improve parallelism. Big changes in the shadow code
could be scary for us due to the likely nasty bug tail!
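
On the reader/writer question in Juergen's mail below, here is a purely
illustrative sketch of the shape such a change would take -- the function
names and the reader/writer split are invented, not the actual shadow code:

    /* Illustrative only: deciding which shadow paths can safely run as
     * readers is exactly the hard part. */
    static DEFINE_RWLOCK(shadow_rwlock);    /* would replace the spinlock */

    void shadow_lookup_path(struct domain *d)
    {
        read_lock(&shadow_rwlock);     /* many vcpus may hold this at once */
        /* ... consult shadow state without modifying it ... */
        read_unlock(&shadow_rwlock);
    }

    void shadow_update_path(struct domain *d)
    {
        write_lock(&shadow_rwlock);    /* exclusive, as the spinlock is now */
        /* ... create/destroy shadows, write shadow entries ... */
        write_unlock(&shadow_rwlock);
    }

The catch is that the hot shadow paths (the page-fault handler installing
entries, for instance) are writers, so any win depends on how much genuinely
read-only work there is to move out of the exclusive section.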

 -- Keir

On 07/10/2009 07:55, "Juergen Gross" <juergen.gross@xxxxxxxxxxxxxx> wrote:

> Hi,
> 
> we've got massive performance problems running an 8 vcpu HVM guest (BS2000)
> under Xen (xen 3.3.1).
> 
> With a specific benchmark producing a rather high load on memory management
> operations (lots of process creation/deletion and memory allocation), the
> 8 vcpu configuration performed worse than the 4 vcpu one. On other platforms
> (/390, MIPS, SPARC) this benchmark scaled rather well with the number of cpus.
> 
> Xen's software performance counters seemed to point to the shadow lock as
> the culprit. I modified the hypervisor to gather some lock statistics (patch
> will be sent soon) and confirmed that the shadow lock really is the
> bottleneck: on average, 4 vcpus are waiting to acquire it!
> 
> Is this a known issue?
> Is there a chance to split the shadow lock into sub-locks or to use a
> reader/writer lock instead?
> I just wanted to ask before trying to understand all of the shadow code :-)
> 
> 
> Juergen
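
For anyone wanting to reproduce this kind of measurement before the patch
appears: the basic idea is per-lock counters for contended acquisitions and
time spent spinning. A rough sketch -- the struct, the field names and the
get_cycles() timestamp helper are invented for illustration:

    /* Invented for illustration; not Juergen's patch. Counter updates
     * are racy, which is acceptable for rough statistics. */
    struct lock_stats {
        unsigned long acquisitions;     /* total successful takes */
        unsigned long contended;        /* takes that had to spin */
        uint64_t cycles_spinning;       /* total cycles spent waiting */
    };

    static void profiled_spin_lock(spinlock_t *lock, struct lock_stats *st)
    {
        if ( !spin_trylock(lock) )          /* contended: time the wait */
        {
            uint64_t t0 = get_cycles();     /* assumed TSC-read helper */
            st->contended++;
            spin_lock(lock);
            st->cycles_spinning += get_cycles() - t0;
        }
        st->acquisitions++;
    }

Dividing the total spin time by the elapsed wall-clock time then gives the
average number of vcpus queued on the lock, which is presumably how the
'on average 4 vcpus waiting' figure was derived.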


