
Re: [Xen-devel] [PATCH 2/3] x86/xsaves: fix overwriting between non-lazy/lazy xsave[sc]



On Mon, Feb 29, 2016 at 02:33:49AM -0700, Jan Beulich wrote:
> > Thanks. 
> > 
> > OK, I will do the performance test. Can you suggest a
> > workload/benchmark that can be used here for the xsave-related
> > performance test?
> 
> Measuring just instruction execution time should be fine for the
> purpose here, I think.
> 
I did the test as follows:

Xsave time measurement method:
call get_s_time() before and after the xsave and take the difference
(a rough sketch of the instrumentation is below).
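
For reference, here is a minimal sketch (not the actual patch) of what
that instrumentation might look like, assuming Xen's existing
get_s_time() helper and the xsave() wrapper in xen/arch/x86/xstate.c;
the function name and logging details are illustrative only:

    #include <xen/lib.h>      /* printk */
    #include <xen/sched.h>    /* struct vcpu */
    #include <xen/time.h>     /* s_time_t, get_s_time */
    #include <asm/xstate.h>   /* xsave */

    /* Hypothetical helper: time a single xsave for vcpu v. */
    static void xsave_timed(struct vcpu *v, uint64_t mask)
    {
        s_time_t before, after;

        before = get_s_time();
        xsave(v, mask);       /* emits xsaveopt/xsavec/xsaves as configured */
        after = get_s_time();

        /* Elapsed time in ns for this single xsave. */
        printk(XENLOG_DEBUG "xsave: %lu ns\n",
               (unsigned long)(after - before));
    }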

1. Only xsaveopt enabled in Xen, two guests started (no workload
running in the guests).

The xsave times fall into two ranges, [28 - 40] and [110 - 140] (most
in 110 - 120).

2. Only xsavec enabled in Xen.
The xsave times fall into two ranges, [30 - 50] and [120 - 140].

I took a fragment of the test results and did a rough estimation.

I also tried computing the average execution time (averaged over every
10 xsave samples); the result is almost the same as above. A sketch of
that reduction is below.
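
A hypothetical sketch of that every-10-samples averaging (single-CPU
assumption; the names are made up for illustration):

    static s_time_t xsave_ns_total;
    static unsigned int xsave_samples;

    /* Accumulate one measured delta and print the mean every 10 samples. */
    static void record_xsave_time(s_time_t delta)
    {
        xsave_ns_total += delta;
        if ( ++xsave_samples == 10 )
        {
            printk(XENLOG_DEBUG "avg xsave: %lu ns\n",
                   (unsigned long)(xsave_ns_total / xsave_samples));
            xsave_ns_total = 0;
            xsave_samples = 0;
        }
    }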

The results suggest that xsaveopt performs better than xsavec (if the
test/measurement method is correct).

> > The other thing: from the text above, I guess the best way to solve the
> > xsave[cs] problem is:
> > 1. use xsaveopt instead of xsave[cs] for now.
> > 2. use xsaves whenever a component can only be saved that way (i.e. when
> >    supervisor state components are supported in Xen).
> > 3. no xsavec support.
> > 4. expose the xsavec/xsaves features to the guest OS if point 2 is OK.
> 
> Provided this results in better performance than the alternative(s),
> yes.
> 
> Jan
