
Re: [Xen-devel] Ongoing/future speculative mitigation work



On Fri, 2018-10-26 at 06:01 -0600, Tamas K Lengyel wrote:
> On Fri, Oct 26, 2018, 1:49 AM Dario Faggioli <dfaggioli@xxxxxxxx>
> wrote:
> > 
> > I haven't done this kind of benchmark yet, but I'd say that, if
> > every vCPU of every domain is doing 100% CPU intensive work,
> > core-scheduling isn't going to make much difference, or help you
> > much, as compared to regular scheduling with hyperthreading enabled.
> 
> Understood, we actually went into this with the assumption that in
> such cases core-scheduling would underperform plain credit1.
>
Which may actually happen. Or it might improve things a little, because
there are higher chances that a core ends up with only one busy thread
(with core-scheduling, vCPUs of different domains never share a core,
so a domain's "odd" leftover vCPU gets a whole core to itself). But then
we're not really benchmarking core-scheduling vs. plain scheduling:
we're benchmarking a side effect of core-scheduling, which is not
equally interesting.

> The idea was to measure the worst case with plain scheduling and with
> core-scheduling to be able to see the difference clearly between the
> two.
> 
For the sake of benchmarking core-scheduling solutions, we should put
ourselves in a position where what we measure is actually its own
impact, and I don't think this particular workload puts us there.
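
To make that a bit more concrete, here is a (purely hypothetical)
sketch of the kind of load generator I'd use instead: each instance
burns the CPU for a while and then sleeps, so vCPUs are runnable only
part of the time and the scheduler's placement decisions actually come
into play. File name, build line and the default 7ms/3ms duty cycle
are all made up for illustration.

/*
 * duty.c - hypothetical duty-cycle load generator (my own sketch,
 * not an existing tool): burn the CPU for 'busy_ms', then sleep for
 * 'idle_ms', so vCPUs are runnable only part of the time.
 *
 * Build: gcc -O2 duty.c -o duty
 * Run:   ./duty 7 3      (one instance per vCPU, inside each VM)
 */
#include <stdlib.h>
#include <time.h>

/* Spin until 'ms' milliseconds of wall-clock time have passed. */
static void burn_ms(long ms)
{
    struct timespec start, now;

    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000L +
             (now.tv_nsec - start.tv_nsec) / 1000000L < ms);
}

int main(int argc, char **argv)
{
    long busy_ms = argc > 1 ? atol(argv[1]) : 7;
    long idle_ms = argc > 2 ? atol(argv[2]) : 3;
    struct timespec idle = { idle_ms / 1000, (idle_ms % 1000) * 1000000L };

    for (;;) {
        burn_ms(busy_ms);        /* runnable: competes for a thread */
        nanosleep(&idle, NULL);  /* blocked: leaves the thread idle */
    }
}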

Then, of course, if this workload is relevant to you, you are more than
entitled to benchmark and evaluate it, and we're always interested in
hearing what you find out. :-)

> > Actual numbers may vary depending on whether VMs have an odd or
> > even number of vCPUs but, e.g., on hardware with 2 threads per
> > core, and using VMs with at least 2 vCPUs each, the _perfect_
> > implementation of core-scheduling would still manage to keep all
> > the *threads* busy, which is --as far as our speculations currently
> > go-- what is causing the performance degradation you're seeing.
> > 
> > So, again, if it is confirmed that this workload of yours is a
> > particularly bad one for SMT, then you are just better off
> > disabling hyperthreading. And, no, I don't think such a situation
> > is common enough to say "let's disable it for everyone by default".
> 
> I wasn't asking to make it the default in Xen, but whether it would
> be reasonable to make it the default for our deployment, where such
> workloads are entirely possible.
>
It all comes down to how common it is, for you, to have a massively
oversubscribed system, with a fully CPU-bound workload, for significant
chunks of time.

As said in a previous email, I think that, if this is common enough,
and it is not just something transient, you are in trouble anyway. And
if it's not causing you/your customers trouble already, it might not
be that common, and hence it wouldn't be necessary/wise to disable
SMT.
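
Just to put some (completely made up) numbers on what I mean by
"massively oversubscribed": say a host has 16 cores / 32 threads, and
runs 20 VMs with 4 vCPUs each, i.e. 80 vCPUs. That is 2.5x
oversubscription of the threads (80/32), and 5x of the actual cores
(80/16); whether that is sustainable depends entirely on how often all
those vCPUs actually want to run at the same time.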

But of course, you know your workload, and your requirements, much
better than I do. If this kind of load really is what you experience,
or what you want to target, then yes, disabling SMT indeed looks like
your best way to go.
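
FWIW, assuming a Xen version that understands the smt= boot option (if
yours doesn't, turning hyperthreading off in the firmware setup
achieves the same), that would just mean adding it to the hypervisor's
line in the bootloader config, something like:

  multiboot2 /boot/xen.gz smt=0 <your other Xen options>

(the exact line of course depends on your bootloader and on what you
already pass there).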

> If there are
> tests that I can run which are the "best case" for hyperthreading, I
> would like to repeat those tests to see where we are.
> 
If we come up with a good enough synthetic benchmark, I'll let you
know.
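
In the meantime, just to give an idea of what "best case" means here:
SMT tends to shine when the two threads of a core have complementary
resource usage, e.g. one mostly stalled on memory and one doing pure
ALU work. Below is a rough, purely illustrative sketch of such a pair
(names, sizes and iteration counts are made up; pinning the two
threads to the two siblings of one core is left to taskset or vCPU
pinning):

/*
 * smt_mix.c - hypothetical micro-benchmark sketch: one memory-latency
 * bound thread plus one compute-bound thread, i.e. the kind of pair
 * where two hyperthreads sharing a core should help, not hurt.
 *
 * Build:   gcc -O2 -pthread smt_mix.c -o smt_mix
 * Compare: both threads pinned to the two siblings of one core vs.
 *          the two workers run one after the other on a single thread.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

#define CHAIN_LEN (64UL * 1024 * 1024 / sizeof(size_t)) /* ~64MB > LLC */
#define ITERS     (1UL << 26)
#define ALU_ITERS (ITERS * 16)

/* Memory-bound worker: dependent loads over a shuffled index chain,
 * so the thread spends most of its time stalled on cache misses. */
static void *mem_worker(void *arg)
{
    size_t *chain = arg, idx = 0;
    unsigned long i;

    for (i = 0; i < ITERS; i++)
        idx = chain[idx];

    return (void *)idx; /* keep the result alive */
}

/* Compute-bound worker: integer arithmetic only, no memory traffic,
 * so it can use the execution units the other thread leaves idle. */
static void *alu_worker(void *arg)
{
    uint64_t x = 88172645463325252ULL;
    unsigned long i;

    for (i = 0; i < ALU_ITERS; i++) {
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
    }

    return (void *)(uintptr_t)x;
}

int main(void)
{
    size_t *chain = malloc(CHAIN_LEN * sizeof(*chain));
    pthread_t t1, t2;
    size_t i;

    if (!chain)
        return 1;

    /* Random permutation of the indexes, so the loads are unpredictable. */
    for (i = 0; i < CHAIN_LEN; i++)
        chain[i] = i;
    for (i = CHAIN_LEN - 1; i > 0; i--) {
        size_t j = rand() % (i + 1), tmp = chain[i];
        chain[i] = chain[j];
        chain[j] = tmp;
    }

    pthread_create(&t1, NULL, mem_worker, chain);
    pthread_create(&t2, NULL, alu_worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    free(chain);
    return 0;
}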

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/
