
Re: [Xen-devel] [PATCH] libxc/x86: avoid overflow in CPUID APIC ID adjustments


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 20 Sep 2019 13:40:17 +0100
  • Cc: Juergen Gross <jgross@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Fri, 20 Sep 2019 12:40:42 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 20/09/2019 11:20, Jan Beulich wrote:
> On 20.09.2019 12:05, Andrew Cooper wrote:
>> On 19/09/2019 12:48, Jan Beulich wrote:
>>> Recent AMD processors may report up to 128 logical processors in CPUID
>>> leaf 1. Doubling this value produces 0 (which OSes sincerely dislike),
>>> as the respective field is only 8 bits wide. Suppress doubling the value
>>> (and its leaf 0x80000008 counterpart) in such a case.
>>>
>>> Additionally don't even do any adjustment when the host topology already
>>> includes room for multiple threads per core.
>>>
>>> Furthermore don't double the Maximum Cores Per Package at all - by us
>>> introducing a fake HTT effect, the core count doesn't need to change.
>>> Instead adjust the Maximum Logical Processors Sharing Cache field, which
>>> so far was zapped altogether.
>>>
>>> Also zap leaf 4 (and at the same time leaf 2) EDX output for AMD.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>> ---
>>> TBD: Using xc_physinfo() output here needs a better solution. The
>>>      threads_per_core value returned is the count of active siblings of
>>>      CPU 0, rather than a system wide applicable value (and constant
>>>      over the entire session). Using CPUID output (leaves 4 and
>>>      8000001e) doesn't look viable either, due to this not really being
>>>      the host values on PVH. Judging from the host feature set's HTT
>>>      flag also wouldn't tell us whether there actually are multiple
>>>      threads per core.
>> The key thing is that htt != "more than one thread per core".  HTT is
>> strictly a bit indicating that topology information is available in a
>> new form in the CPUID leaves (or in AMD's case, that the same
>> information should be interpreted in a new way).  Just because HTT is
>> set (and it does get set on non-HT capable systems), doesn't mean there
>> is space for more than one thread per core in the topology information.
>>
>> For PV guests, my adjustment in the CPUID series shows (what I believe
>> to be) the only correct way of propagating the host HTT/CMP_LEGACY
>> settings through.
>>
>> For HVM guests, it shouldn't really have anything to do with the host
>> setting.  We should be choosing how many threads/core to give to the
>> guest, then constructing the topology information from first principles.
>>
>> Ignore the PVH case.  It is totally broken for several other reasons as
>> well, and PVH Dom0 isn't a production-ready thing yet.
>>
>> This gets us back to the PV case where the host information is actually
>> in view, and (for backport purposes) can be trusted.
> Okay, this means I'll revive and finish the half cpuid() based attempt
> I had made initially. A fundamental question remains open though from
> your reply: Do you agree with the idea of avoiding the multiplication
> by 2 if the host topology already provides at least one bit of thread
> ID within the APIC ID?

In theory, yes.  In practice, I'd err on the side of not.
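(As context for "in theory, yes": the earlier point that HTT is not the same as "more than one thread per core" can be sketched as below.  This is a hypothetical helper, not the actual libxc code; the field names and the decision to derive the thread bit from the logical/core ratio are assumptions for illustration.)

```c
#include <stdint.h>

/*
 * Hypothetical sketch, not the actual libxc code.  The HTT flag
 * (CPUID leaf 1 EDX bit 28) only indicates that the topology count
 * fields are valid; it is commonly set even on non-HT parts.  Whether
 * the host APIC ID actually contains at least one thread-ID bit has to
 * be derived from the ratio of logical processors to cores per package.
 */
static unsigned int host_has_thread_bit(unsigned int htt,
                                        unsigned int logical_per_pkg,
                                        unsigned int cores_per_pkg)
{
    if ( !htt || !cores_per_pkg )
        return 0;                /* legacy meaning: one logical processor */

    /* More logical processors than cores => a thread bit is in use. */
    return (logical_per_pkg / cores_per_pkg) > 1;
}
```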

A further problem with CPUID handling is that it is recalculated from
scratch even after migrate.  Therefore, any changes to the algorithm
will cause inconsistencies to be seen in the guest across
migrate/upgrade.  This problem becomes substantially worse if the patch
is backported to stable trees.

Now that get_cpu_policy has existed for a little while, and
set_cpu_policy is imminent, fixing the "CPUID changes across migrate"
problem is almost doable, and is on the plan for toolstack work.

That said, ultimately, anything "pre 4.14" => "4.14" is going to hit a
discontinuity, because there is information discarded on the source side
which can't be reconstructed on the destination.

Overall, I would suggest doing the absolute minimum change required to
unbreak Rome CPUs.  Everything further is going to cause differences
across migrate.
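(The minimal Rome fix under discussion amounts to saturating the 8-bit leaf 1 EBX[23:16] field rather than letting the doubling wrap to 0.  A sketch under that assumption; this is not the actual patch:)

```c
#include <stdint.h>

/*
 * Sketch only, not the actual patch.  CPUID leaf 1 EBX[23:16]
 * ("logical processors per package") is 8 bits wide, so doubling a
 * host value of 128 (recent AMD processors) wraps to 0, which OSes
 * dislike.  Saturate at 255 instead of wrapping.
 */
static uint8_t double_lppp(uint8_t host_lppp)
{
    unsigned int doubled = (unsigned int)host_lppp * 2;

    return doubled > 0xff ? 0xff : (uint8_t)doubled;
}
```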

In 4.14, I think we can reasonably fix all of:
1) CPUID data discarded for migrate
2) domain builder uses native CPUID
3) topology handling isn't consistent with SDM/APM

all of which is libxc/libxl work, once set_cpu_policy() is in place.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
