
Re: [Xen-devel] Ongoing/future speculative mitigation work



On Thu, 2018-10-25 at 10:25 -0600, Tamas K Lengyel wrote:
> On Thu, Oct 25, 2018 at 10:01 AM Dario Faggioli <dfaggioli@xxxxxxxx>
> wrote:
> > 
> > Which is indeed very interesting. But, as we're discussing in the
> > other
> > thread, I would, in your case, do some more measurements, varying
> > the
> > configuration of the system, in order to be absolutely sure you are
> > not
> > hitting some bug or anomaly.
> 
> Sure, I would be happy to repeat tests that were done in the past to
> see whether they still hold. We have run this test with Xen 4.10,
> 4.11 and 4.12-unstable, on laptops and desktops, using credit1 and
> credit2, and the result is consistent: hyperthreading yields the
> worst performance.
>
So, just to be clear, I'm not saying it's impossible to find a workload
for which HT is detrimental. Quite the opposite. And these benchmarks
you're running might well fall into that category.

I'm just suggesting you double-check that. :-)

> It varies between platforms, but it's around a 10-40% performance
> hit with hyperthreading on. The test we run is very CPU-intensive,
> and we heavily oversubscribe the system. But I don't think it would
> be all that unusual to run into such a setup in the real world from
> time to time.
> 
Ah, ok, so you're _heavily_ oversubscribing...

So, I don't think that a heavily oversubscribed host, where all the
vCPUs want to run 100% CPU-intensive work --and this not being some
transient situation-- is that common. And for the cases where it is,
there is not much we can do, hyperthreading or not.
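
To make this concrete, the kind of thing I mean by "everyone 100%
CPU-bound" is a trivial spinner like the one below (just an untested
sketch; ITERS is an arbitrary placeholder). Run one instance per vCPU,
with many more vCPUs than pCPUs, and compare the wall-clock times with
SMT enabled and disabled:

/* spin.c -- minimal CPU-bound worker; build with: gcc -O2 spin.c */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

#define ITERS 2000000000ULL    /* arbitrary amount of "work" */

int main(void)
{
    volatile unsigned long long acc = 0;  /* volatile keeps the loop */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long long i = 0; i < ITERS; i++)
        acc += i * i;          /* pure integer crunching, no IO */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("%.2fs\n", (double)(t1.tv_sec - t0.tv_sec) +
                      (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}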

In any case, hyperthreading works best when the workload is mixed,
where it helps make sure that IO-bound tasks get enough chances to
issue their IO requests, without conflicting too much with the CPU-
bound tasks doing their number/logic crunching.
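
As a toy illustration of that mix (again only a sketch, not a real
benchmark: nanosleep() stands in for real IO waits), a pair of threads
like the following is roughly the situation where an SMT sibling pair
barely contends:

/* mixed.c -- toy mixed workload; build with: gcc mixed.c -lpthread */
#define _POSIX_C_SOURCE 199309L
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static void *io_bound(void *arg)
{
    (void)arg;
    /* Stand-in for a task blocked on IO most of the time: short
     * bursts of activity separated by waits. */
    for (int i = 0; i < 100; i++) {
        struct timespec ts = { 0, 10 * 1000 * 1000 }; /* 10ms "IO" */
        nanosleep(&ts, NULL);
    }
    return NULL;
}

static void *cpu_bound(void *arg)
{
    (void)arg;
    volatile unsigned long long acc = 0;

    /* Steady number crunching while the other thread mostly sleeps. */
    for (unsigned long long i = 0; i < 1000000000ULL; i++)
        acc += i;
    return NULL;
}

int main(void)
{
    pthread_t io, cpu;

    pthread_create(&io, NULL, io_bound, NULL);
    pthread_create(&cpu, NULL, cpu_bound, NULL);
    pthread_join(io, NULL);
    pthread_join(cpu, NULL);
    printf("done\n");
    return 0;
}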

Having _everyone_ wanting to do actual work on the CPUs is, IMO, one
of the worst workloads for hyperthreading, and it is in fact the kind
of workload where I've always seen it have the least beneficial effect
on performance. I guess it's possible that, in your case, it's
actually doing more harm than good.

It's an interesting data point, but I wouldn't use a workload like
that to measure the benefit, or the impact, of an SMT-related change.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/

