
Re: [Xen-devel] [PATCH] x86/cpuid: Untangle Invariant TSC handling


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 6 Mar 2020 17:48:59 +0000
  • Cc: Anthony PERARD <anthony.perard@xxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Fri, 06 Mar 2020 17:49:11 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 05/03/2020 08:20, Jan Beulich wrote:
> On 04.03.2020 19:40, Andrew Cooper wrote:
>> On 04/03/2020 10:25, Jan Beulich wrote:
>>> On 03.03.2020 19:24, Andrew Cooper wrote:
>>>> ITSC being visible to the guest is currently implicit with the toolstack
>>>> unconditionally asking for it, and Xen clipping it based on the vTSC and/or
>>>> XEN_DOMCTL_disable_migrate settings.
>>>>
>>>> This is problematic for several reasons.
>>>>
>>>> First, the implicit vTSC behaviour manifests as a real bug on migration to
>>>> a host with a different frequency, with ITSC but without TSC scaling
>>>> capabilities, whereby the ITSC feature becomes advertised to the guest.
>>>> ITSC will disappear again if the guest migrates to a server with the same
>>>> frequency as the original, or to one with TSC scaling support.
>>>>
>>>> Secondly, disallowing ITSC unless the guest doesn't migrate is conceptually
>>>> wrong.  It is common to have migration pools of identical hardware, at
>>>> which point the TSC frequency is the same,
>>> This statement is too broad: Pools of identical hardware may have the same
>>> nominal frequencies, but two distinct systems are hardly ever going to have
>>> the exact same actual (measured or even real) frequencies.
>> There is no such thing as truly invariant TSC.  Even with the best
>> hardware in the world, the reference frequency will change based on
>> physical properties of the surroundings, including things like ambient
>> temperature.  i.e. even a single server sitting in a datacenter is
>> likely to see a fractional change in frequency across a 24h period.
>>
>> What matters is the error margins, and how long until it manifests as a
>> noticeable difference.
>>
>>> Recall Olaf's vTSC-tolerance patch that still hasn't landed anywhere?
>> This is a different problem.  Even on the same system, errors in Xen's
>> frequency calculations can differ by several hundred kHz (iirc), boot to
>> boot, making it quite useless for answering the question "am I running
>> at the frequency the guest saw before?", which is how we judge whether to
>> intercept TSC accesses or not.
> But that's why I've said "too broad": Right now pools of identical
> hardware will not look to us as if they all had the same freq.

The statement is about the hardware.

Xen's (mis)measurement is just another bug in the mix, needing fixing,
and not related to the paragraph.

>> There are things which can be done about this, such as using frequency
>> data provided by the CPU directly (rather than correlating it with a
>> separate timesource).  At that point, the only difference between two
>> identical systems will be the variability in the reference clock, and
>> PLL circuitry which ultimately multiplies it up from 19.2/25/100 MHz to
>> the 1-3.5GHz typically encountered for core frequencies.
> Right. The question just is how large the error margin is from the
> nominal frequency reported via CPUID leaves 15/16 and the actual
> frequency. If it's no worse than the differences we observe from
> our "measurement", then yes, we could and perhaps should use that
> data if available.

I can't locate (even via backchannels) any written guarantee on error
margins, but from what I gather, it is in practice rather more accurate
than Xen's error margins.
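
To put rough numbers on "how long until it manifests" (back-of-envelope
arithmetic of mine, not figures from any datasheet):

/*
 * Back-of-envelope only: how quickly does a frequency error become visible
 * as clock skew?  A 300kHz miscalculation on a 2GHz TSC is 150ppm, i.e.
 * roughly 13 seconds of apparent drift per day if nothing compensates.
 */
#include <stdio.h>

int main(void)
{
    const double nominal_hz = 2.0e9; /* assumed guest-visible TSC frequency */
    const double error_hz   = 3.0e5; /* assumed calibration error (300kHz) */

    printf("%.0f ppm => %.1f seconds of apparent drift per day\n",
           error_hz / nominal_hz * 1e6, error_hz / nominal_hz * 86400);
    return 0;
}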

CPUID leaves 15/16 are far from perfect - see the steady stream of
corrections making their way into Linux.  The most recent issue I saw
was that 15/16 has no compensation for overclocking settings in the
K-sku processors.  Either way, there are systems now in Linux where the
TSC is the sole clocksource, and the stability seems to be ok now.
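
For illustration, here is a minimal userspace sketch (not Xen code) of the
kind of derivation Linux does from those leaves; a zero crystal frequency
in leaf 0x15's ECX on many parts is exactly the sort of gap those fixups
keep filling:

/*
 * Sketch of deriving the TSC frequency from CPUID leaves 0x15/0x16.
 * Caveats as above: ECX of leaf 0x15 is zero on plenty of parts and needs
 * model-specific fixups, and leaf 0x16 only reports a rounded base
 * frequency in MHz.
 */
#include <cpuid.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t tsc_hz_from_cpuid(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 0x15: EBX/EAX is the TSC/crystal ratio, ECX the crystal in Hz. */
    if ( __get_cpuid(0x15, &eax, &ebx, &ecx, &edx) && eax && ebx && ecx )
        return (uint64_t)ecx * ebx / eax;

    /* Leaf 0x16: EAX is the base frequency in MHz - approximate only. */
    if ( __get_cpuid(0x16, &eax, &ebx, &ecx, &edx) && eax )
        return (uint64_t)eax * 1000000;

    return 0; /* Neither enumerated - fall back to measuring. */
}

int main(void)
{
    printf("CPUID-reported TSC frequency: %llu Hz\n",
           (unsigned long long)tsc_hz_from_cpuid());
    return 0;
}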

In addition to the logic Linux currently uses, the TSC frequency can be
obtained for Nehalem through Broadwell in a similar way to the existing
Atom logic, and for AMD, the TSC frequency can be read directly from the
P0 frequency control MSR, which is described in the BKDG/PPR and available
from at least Fam10h onwards (and we really don't care about K8 these days).


If we end up with a measured TSC frequency which is very close to what
the model-specific logic thinks the actual TSC frequency is, then going
with the model specific version seems like a much better bet - in
particular, it should make most systems come in with a nice round number.

Obviously, the first step towards this is to build the model specific
logic and at least report it on boot, so we can then see what the
differences are in practice.
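
As a sketch of what that boot-time reporting/decision might eventually look
like (entirely hypothetical names, and the 0.1% tolerance is a placeholder
of mine, not something anyone has agreed on):

/*
 * Hypothetical illustration of the "report on boot, then decide" step:
 * compare a measured frequency with the model-specific value, and only
 * prefer the reported one when the two agree within a tolerance.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TOLERANCE_PER_MILLE 1   /* 0.1% - a placeholder, nothing agreed */

static uint64_t choose_tsc_hz(uint64_t measured_hz, uint64_t reported_hz)
{
    uint64_t delta = measured_hz > reported_hz ? measured_hz - reported_hz
                                               : reported_hz - measured_hz;

    printf("TSC: measured %" PRIu64 " Hz, CPU-reported %" PRIu64 " Hz\n",
           measured_hz, reported_hz);

    if ( reported_hz && delta * 1000 <= reported_hz * TOLERANCE_PER_MILLE )
        return reported_hz; /* Trust the (typically round) reported value. */

    return measured_hz;     /* Too far apart - keep the measurement. */
}

int main(void)
{
    /* Example numbers only: 870kHz apart, so snap to the round value. */
    printf("chosen: %" PRIu64 " Hz\n",
           choose_tsc_hz(2499130000ULL, 2500000000ULL));
    return 0;
}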

>>>> and more modern hardware has TSC scaling support anyway.  In both cases,
>>>> it is safe to advertise ITSC and migrate the guest.
>>>>
>>>> Remove all implicit logic in Xen, and make ITSC part of the max CPUID
>>>> policies for guests.  Plumb an itsc parameter into xc_cpuid_apply_policy()
>>>> and have libxl__cpuid_legacy() fill in the two cases where it can
>>>> reasonably expect ITSC to be safe for the guest to see.
>>>>
>>>> This is a behaviour change for TSC_MODE_NATIVE, where the ITSC will now
>>>> reliably not appear, and for the case where the user explicitly requests
>>>> ITSC, in which case it will appear even if the guest isn't marked as
>>>> nomigrate.
>>> How sensible is it to allow the user to request something like ITSC with
>>> no respective support underneath?
>> Right now, Xen will ignore ITSC if the hardware isn't capable, just like
>> any other missing feature flag.
>>
>> When we get the policy auditing logic in better shape, I intend to
>> reject requests which can't be fulfilled.
> Okay, good to know. I wonder though how well we'll be able to
> express in the eventual user visible error message which of
> the settings was actually refused.

That is still very much TBD, but even the current "There was some
problem with leaf $X, subleaf $Y and MSR $Z" is far better than nothing.

>>> Shouldn't we translate such a request
>>> into enabling vTSC if there's no ITSC on the platform?
>> No, because a) doing things implicitly like this is the root of far too
>> many bugs, this patch included, and b) it probably isn't what the user
>> wants.
>>
>> The reason to play around with TSC settings will ultimately be to try
>> and avoid intercepting RDTSC, because the performance hit from
>> interception dominates most other factors.
>>
>>> Actually looking
>>> at the change to libxl__cpuid_legacy() I wonder whether you don't instead
>>> mean "requests vTSC" here.
>> I don't see how you come to that conclusion.  It is two separate cases
>> where the toolstack can reasonably expect the guest-observed frequency
>> not to differ.
> Looking at this hunk

Ok.  There are ...

>
> @@ -432,7 +433,22 @@ void libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid,
>      if (info->type == LIBXL_DOMAIN_TYPE_HVM)
>          pae = libxl_defbool_val(info->u.hvm.pae);
>  
> -    xc_cpuid_apply_policy(ctx->xch, domid, NULL, 0, pae);
> +    /*
> +     * Advertising Invariant TSC to a guest means that the TSC frequency won't
> +     * change at any point in the future.
> +     *
> +     * We do not have enough information about potential migration
> +     * destinations to know whether advertising ITSC is safe, but if the guest
> +     * isn't going to migrate, then the current hardware is all that matters.

... 1, or ...

> +     *
> +     * Alternatively, an internal property of vTSC is that the values read are
> +     * invariant.  Advertise ITSC when we know the domain will have emulated
> +     * TSC everywhere it goes.

... 2 orthogonal cases described, where xl/libxl in its current form can
determine that ITSC is safe to advertise.

> +     */
> +    itsc = (libxl_defbool_val(info->disable_migrate) ||
> +            info->tsc_mode == LIBXL_TSC_MODE_ALWAYS_EMULATE);
> +
> +    xc_cpuid_apply_policy(ctx->xch, domid, NULL, 0, pae, itsc);
>
> I see the check of ->tsc_mode, which aiui is a request to enable
> vTSC unconditionally.

vTSC in Xen is not !!tsc_mode.

In particular, libxl cannot (currently) determine whether
TSC_MODE_NATIVE will result in suitable invariant properties inside the
guest, because amongst other things, it depends on where the VM might
migrate to in the future.
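
For completeness, the two cases expressed from the guest-config side
(illustrative xl.cfg fragments; option names as documented in xl.cfg(5)):

# Case 1: the guest is never going to migrate, so the current host is all
# that matters and ITSC can be advertised.
nomigrate=1

# Case 2: TSC accesses are emulated everywhere the guest goes, so the
# values it reads are invariant by construction.
tsc_mode="always_emulate"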

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

