
Re: [Xen-devel] Ongoing/future speculative mitigation work

  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Tamas K Lengyel <tamas.k.lengyel@xxxxxxxxx>
  • From: George Dunlap <george.dunlap@xxxxxxxxxx>
  • Date: Fri, 26 Oct 2018 11:11:18 +0100
  • Autocrypt: addr=george.dunlap@xxxxxxxxxx; prefer-encrypt=mutual
  • Cc: mpohlack@xxxxxxxxx, Julien Grall <julien.grall@xxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, joao.m.martins@xxxxxxxxxx, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Daniel Kiper <daniel.kiper@xxxxxxxxxx>, Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>, aliguori@xxxxxxxxxx, uwed@xxxxxxxxx, Lars Kurth <lars.kurth@xxxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, ross.philipson@xxxxxxxxxx, Dario Faggioli <dfaggioli@xxxxxxxx>, Matt Wilson <msw@xxxxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, JGross@xxxxxxxx, sergey.dyasli@xxxxxxxxxx, Wei Liu <wei.liu2@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxx>, mdontu <mdontu@xxxxxxxxxxxxxxx>, dwmw@xxxxxxxxxxxx, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Fri, 26 Oct 2018 10:11:29 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 10/25/2018 07:13 PM, Andrew Cooper wrote:
> On 25/10/18 18:58, Tamas K Lengyel wrote:
>> On Thu, Oct 25, 2018 at 11:43 AM Andrew Cooper
>> <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 25/10/18 18:35, Tamas K Lengyel wrote:
>>>>> On Thu, Oct 25, 2018 at 11:02 AM George Dunlap
>>>>> <george.dunlap@xxxxxxxxxx> wrote:
>>>>> On 10/25/2018 05:55 PM, Andrew Cooper wrote:
>>>>>> On 24/10/18 16:24, Tamas K Lengyel wrote:
>>>>>>>> A solution to this issue was proposed, whereby Xen synchronises
>>>>>>>> siblings on vmexit/entry, so we are never executing code in two
>>>>>>>> different privilege levels.  Getting this working would make it safe
>>>>>>>> to continue using hyperthreading even in the presence of L1TF.
>>>>>>>> Obviously, it's going to come with a perf hit, but compared to
>>>>>>>> disabling hyperthreading, all it's got to do is beat a 60% perf hit
>>>>>>>> to make it the preferable option for making your system L1TF-proof.
>>>>>>> Could you shed some light on what tests were done where that 60%
>>>>>>> performance hit was observed? We have performed intensive stress-tests
>>>>>>> to verify this, but according to our findings turning off
>>>>>>> hyper-threading actually improves performance on all machines we have
>>>>>>> tested thus far.
>>>>>> Aggregate inter and intra host disk and network throughput, which is a
>>>>>> reasonable approximation of a load of webserver VM's on a single
>>>>>> physical server.  Small packet IO was hit worst, as it has a very high
>>>>>> vcpu context switch rate between dom0 and domU.  Disabling HT means you
>>>>>> have half the number of logical cores to schedule on, which doubles the
>>>>>> mean time to next timeslice.
>>>>>>
>>>>>> In principle, for a fully optimised workload, HT gets you ~30% extra due
>>>>>> to increased utilisation of the pipeline functional units.  Some
>>>>>> resources are statically partitioned, while some are competitively
>>>>>> shared, and it's now been well proven that actions on one thread can
>>>>>> have a large effect on others.
>>>>>>
>>>>>> Two arbitrary vcpus are not an optimised workload.  If the perf
>>>>>> improvement you get from not competing in the pipeline is greater than
>>>>>> the perf loss from Xen's reduced capability to schedule, then disabling
>>>>>> HT would be an improvement.  I can certainly believe that this might be
>>>>>> the case for Qubes style workloads where you are probably not very
>>>>>> overprovisioned, and you probably don't have long running IO and CPU
>>>>>> bound tasks in the VMs.
>>>>> As another data point, I think it was MSCI who said they always disabled
>>>>> hyperthreading, because they also found that their workloads ran slower
>>>>> with HT than without.  Presumably they were doing massive number
>>>>> crunching, such that each thread was waiting on the ALU a significant
>>>>> portion of the time anyway; at which point the superscalar scheduling
>>>>> and/or reduction in cache efficiency would have brought performance from
>>>>> "no benefit" down to "negative benefit".
>>>> Thanks for the insights. Indeed, we are primarily concerned with
>>>> performance of Qubes-style workloads which may range from
>>>> no-oversubscription to heavily oversubscribed. It's not a workload we
>>>> can predict or optimize before-hand, so we are looking for a default
>>>> that would be 1) safe and 2) performant in the most general case
>>>> possible.
>>> So long as you've got the XSA-273 patches, you should be able to park
>>> and reactivate hyperthreads using `xen-hptool cpu-{online,offline} $CPU`,
>>> i.e. to change the hyperthreading configuration at runtime.  It's not
>>> quite the same as changing it in the BIOS, but as far as competition for
>>> pipeline resources goes, it should be good enough.
>> Thanks, indeed that is a handy tool to have. We often can't disable
>> hyperthreading in the BIOS anyway, because most BIOSes don't allow you
>> to do that when TXT is used.
> Hmm - that's an odd restriction.  I don't immediately see why such a
> restriction would be necessary.
>
>> That said, with this tool we still
>> require some way to determine when to do parking/reactivation of
>> hyperthreads. We could certainly park hyperthreads when we see the
>> system is being oversubscribed in terms of number of vCPUs being
>> active, but for real optimization we would have to understand the
>> workloads running within the VMs if I understand correctly?
> TBH, I'd perhaps start with an admin control which lets them switch
> between the two modes, and some instructions on how/why they might want
> to try switching.
>
> Trying to second-guess the best HT setting automatically is most likely
> going to be a lost cause.  It will be system specific as to whether the
> same workload is better with or without HT.

There may be hardware-specific performance counters that could be used
to detect when pathological cases are happening.  But that would need to
be implemented and/or re-verified on basically every new piece of hardware.
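For what it's worth, the parking approach could be scripted along these lines from dom0.  This is only a sketch, not a tested policy: it assumes a Linux dom0 exposing the usual sysfs topology files, comma-separated `thread_siblings_list` entries (ranges like "0-1" would need extra handling), and the XSA-273-era `xen-hptool cpu-offline` subcommand.

```shell
#!/bin/sh
# Sketch: park the secondary hyperthread(s) of every core.
# Assumes comma-separated sibling lists, e.g. "0,4".

# Reads thread_siblings_list-style lines on stdin and prints every
# CPU except the first in each list, de-duplicated -- these are the
# sibling threads to park.
pick_secondary_siblings() {
    tr ',' ' ' | while read -r first rest; do
        for cpu in $rest; do
            echo "$cpu"
        done
    done | sort -n -u
}

# Wiring it up (needs dom0 privileges, so left commented out):
# cat /sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list \
#     | pick_secondary_siblings \
#     | while read -r cpu; do xen-hptool cpu-offline "$cpu"; done
```

Running the same loop with `xen-hptool cpu-online` would restore the siblings, so an admin (or eventually a policy daemon) could flip between the two modes at runtime as Andrew suggests.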


Xen-devel mailing list