
Re: [Xen-devel] memory performance 20% degradation in DomU -- Sisu



On Wed, Mar 05, 2014 at 09:29:30PM +0000, Gordan Bobic wrote:
> Just out of interest, have you tried the same test with HVM DomU?
> The two have different characteristics, and IIRC for some workloads
> PV can be slower than HVM. The recent PVHVM work was intended to
> combine the best aspects of both, but that is more recent than Xen
> 4.3.0.
> 
> It is also interesting that your findings are approximately similar
> to mine, albeit with a very different testing methodology:
> 
> http://goo.gl/lIUk4y

I don't know whether you used PV drivers (for HVM), or whether you used
a block device rather than a file as the backend.

It also helps to use 'fio' when testing this sort of thing.
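
Something like the following, for instance (a hypothetical job file,
untested, so adjust the filename to whatever backend you are testing):

    ; randread.fio -- illustrative job file, not from this thread
    [global]
    ioengine=libaio    ; asynchronous I/O, needs libaio installed
    direct=1           ; bypass the page cache
    runtime=60
    time_based

    [randread]
    filename=/dev/xvdb ; assumed guest block device, adjust to yours
    rw=randread
    bs=4k
    iodepth=32

Running 'fio randread.fio' then reports throughput and latency
percentiles for the device.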

> 
> Gordan
> 
> On 03/05/2014 08:09 PM, Sisu Xi wrote:
> >Hi, Konrad:
> >
> >It is the PV domU.
> >
> >Thanks.
> >
> >Sisu
> >
> >
> >On Wed, Mar 5, 2014 at 11:33 AM, Konrad Rzeszutek Wilk
> ><konrad.wilk@xxxxxxxxxx> wrote:
> >
> >    On Tue, Mar 04, 2014 at 05:00:46PM -0600, Sisu Xi wrote:
> >     > Hi, all:
> >     >
> >     > I also used ramspeed to measure memory throughput.
> >     > http://alasir.com/software/ramspeed/
> >     >
> >     > I am using v2.6, the single-core version. The commands I used are
> >     > './ramspeed -b 3' (for int) and './ramspeed -b 6' (for float).
> >     > The benchmark measures four operations: add, copy, scale, and
> >     > triad, and also reports an average over all four operations.
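
For reference, those four operations are the STREAM-style kernels.
Roughly, in C (my paraphrase, not ramspeed's actual source; s is an
arbitrary scalar):

    #include <stddef.h>

    /* one pass of each kernel over arrays a, b, c of n doubles */
    static void stream_kernels(double *a, double *b, double *c,
                               size_t n, double s)
    {
        for (size_t i = 0; i < n; i++) c[i] = a[i];            /* copy  */
        for (size_t i = 0; i < n; i++) b[i] = s * c[i];        /* scale */
        for (size_t i = 0; i < n; i++) c[i] = a[i] + b[i];     /* add   */
        for (size_t i = 0; i < n; i++) a[i] = b[i] + s * c[i]; /* triad */
    }

All four are sequential, bandwidth-bound streams, which makes them a
good probe of raw memory throughput rather than of latency.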
> >     >
> >     > The results in DomU show around 20% performance degradation
> >     > compared to the non-virtualized results.
> >
> >    What kind of domU? PV or HVM?
> >     >
> >     > Attached are the results. The left part shows the results for
> >     > int, while the right part shows the results for float. The Y axis
> >     > is the measured throughput; each box contains 100 experiment
> >     > repeats.
> >     > The black boxes are the results in the non-virtualized
> >     > environment, while the blue ones are the results I got in DomU.
> >     >
> >     > The Xen version I am using is 4.3.0, 64-bit.
> >     >
> >     > Thanks very much!
> >     >
> >     > Sisu
> >     >
> >     >
> >     >
> >     > On Tue, Mar 4, 2014 at 4:49 PM, Sisu Xi <xisisu@xxxxxxxxx> wrote:
> >     >
> >     > > Hi, all:
> >     > >
> >     > > I am trying to study cache/memory performance under Xen, and
> >     > > have encountered some problems.
> >     > >
> >     > > My machine has an Intel Core i7 X980 processor with 6 physical
> >     > > cores. I disabled hyper-threading and frequency scaling, so the
> >     > > cores should be running at a constant speed.
> >     > > Dom0 was booted with 1 VCPU pinned to 1 core and 2 GB of memory.
> >     > >
> >     > > After that, I booted DomU with 1 VCPU pinned to a separate core
> >     > > and 1 GB of memory. The credit scheduler is used, and no cap is
> >     > > set for either domain, so DomU should be able to use all of its
> >     > > core's resources.
> >     > >
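For pinning like that, the setup presumably looks something like the
following (my reconstruction, not Sisu's actual configuration). On the
Xen command line:

    dom0_max_vcpus=1 dom0_vcpus_pin dom0_mem=2048M

and in the domU config file:

    # hypothetical domU config fragment
    memory = 1024     # 1 GB
    vcpus  = 1
    cpus   = "1"      # pin the single VCPU to physical CPU 1

Afterwards 'xl vcpu-list' shows which physical CPU each VCPU is
pinned to, which is worth double-checking before measuring.
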
> >     > > Each physical core has a dedicated 32KB L1 cache and a
> >     > > dedicated 256KB L2 cache, and all cores share a 12MB L3 cache.
> >     > >
> >     > > I created a simple program that allocates an array of a
> >     > > specified size, loads it once, and then randomly accesses every
> >     > > cache line once (1 cache line is 64B on my machine).
> >     > > rdtsc is used to record the duration of the random accesses.
> >     > >
> >     > > I tried different data sizes, with 1000 repeats for each data
> >     > > size. Attached is a boxplot of the average access time for one
> >     > > cache line.
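
A measurement loop of that shape would look roughly like the sketch
below (my reconstruction from the description, not Sisu's actual
program): build a random cycle over the cache lines, chase it so that
every line is loaded exactly once per pass, and time the pass with
rdtsc.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define LINE 64                       /* cache line size in bytes */

    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(int argc, char **argv)
    {
        size_t bytes = argc > 1 ? strtoul(argv[1], NULL, 0) : 1 << 20;
        size_t n = bytes / LINE, i;       /* number of cache lines */
        char *buf = malloc(n * LINE);
        size_t *perm = malloc(n * sizeof *perm);

        /* Fisher-Yates shuffle of the cache-line indices */
        for (i = 0; i < n; i++) perm[i] = i;
        for (i = n - 1; i > 0; i--) {
            size_t j = rand() % (i + 1), t = perm[i];
            perm[i] = perm[j]; perm[j] = t;
        }
        /* store, at the start of each line, a pointer to the next */
        for (i = 0; i < n; i++)
            *(char **)(buf + perm[i] * LINE) =
                buf + perm[(i + 1) % n] * LINE;

        char *p = buf + perm[0] * LINE;   /* warm-up: load every line */
        for (i = 0; i < n; i++) p = *(char **)p;

        uint64_t t0 = rdtsc();            /* timed: one load per line */
        for (i = 0; i < n; i++) p = *(char **)p;
        uint64_t t1 = rdtsc();

        /* printing p keeps the compiler from deleting the chase */
        printf("%.2f cycles/line (%p)\n", (double)(t1 - t0) / n,
               (void *)p);
        return 0;
    }

Because each load depends on the previous one, the loads cannot
overlap, so the per-line figure approaches the full miss latency once
the array outgrows a given cache level.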
> >     > >
> >     > > The x axis is the data size, the y axis is CPU cycles. The
> >     > > three vertical lines at 32KB, 256KB, and 12MB mark the L1, L2,
> >     > > and L3 cache sizes on my machine.
> >     > > *The black boxes are the results I got running non-virtualized,
> >     > > while the blue boxes are the results I got in DomU.*
> >     > >
> >     > > For some reason, the results in DomU vary much more than the
> >     > > results in the non-virtualized environment.
> >     > > I also repeated the same experiments in DomU at run level 1;
> >     > > the results are the same.
> >     > >
> >     > > Can anyone give some suggestions about what might be the reason
> >    for this?
> >     > >
> >     > > Thanks very much!
> >     > >
> >     > > Sisu
> >     > >
> >
> >
> >
> >
> >--
> >Sisu Xi, PhD Candidate
> >
> >http://www.cse.wustl.edu/~xis/
> >Department of Computer Science and Engineering
> >Campus Box 1045
> >Washington University in St. Louis
> >One Brookings Drive
> >St. Louis, MO 63130
> >
> >
> >
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

