
Re: [Xen-devel] [PATCH 2/2] xen: merge temporary vcpu pinning scenarios


  • To: Juergen Gross <jgross@xxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Tue, 23 Jul 2019 16:55:28 +0100
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Tue, 23 Jul 2019 15:55:48 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 23/07/2019 16:53, Juergen Gross wrote:
> On 23.07.19 17:29, Andrew Cooper wrote:
>> On 23/07/2019 16:22, Juergen Gross wrote:
>>> On 23.07.19 17:04, Jan Beulich wrote:
>>>> On 23.07.2019 16:29, Juergen Gross wrote:
>>>>> On 23.07.19 16:14, Jan Beulich wrote:
>>>>>> On 23.07.2019 16:03, Jan Beulich wrote:
>>>>>>> On 23.07.2019 15:44, Juergen Gross wrote:
>>>>>>>> On 23.07.19 14:42, Jan Beulich wrote:
>>>>>>>>> v->processor gets latched into st->processor before raising
>>>>>>>>> the softirq, but can't the vCPU be moved elsewhere by the time
>>>>>>>>> the softirq handler actually gains control? If that's not
>>>>>>>>> possible (and if it's not obvious why, and as you can see it's
>>>>>>>>> not obvious to me), then I think a code comment wants to be
>>>>>>>>> added there.
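
[For readers following the race Jan describes, the pattern in question
looks roughly like this -- an illustrative C sketch with simplified
names, not the exact Xen code:

    static DEFINE_PER_CPU(struct softirq_trap, softirq_trap);

    void queue_nmi_for(struct vcpu *v)
    {
        struct softirq_trap *st = &this_cpu(softirq_trap);

        st->vcpu = v;
        st->processor = v->processor;   /* pCPU latched here ... */
        raise_softirq(NMI_MCE_SOFTIRQ); /* ... but the scheduler may
                                         * move v to another pCPU before
                                         * the softirq handler runs and
                                         * consumes st->processor. */
    }
]
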
>>>>>>>>
>>>>>>>> You are right, it might be possible for the vcpu to move around.
>>>>>>>>
>>>>>>>> OTOH is it really important to run the target vcpu exactly on
>>>>>>>> the cpu it is executing on (or has last executed on) at the time
>>>>>>>> the NMI/MCE is being queued? This is in no way related to the
>>>>>>>> cpu the MCE or NMI happened on. It is just a random cpu, and so
>>>>>>>> it would be if we did the cpu selection when the softirq handler
>>>>>>>> is running.
>>>>>>>>
>>>>>>>> One question to understand the idea behind all that: _why_ is
>>>>>>>> the vcpu pinned until it does an iret? I could understand if it
>>>>>>>> were pinned to the cpu where the NMI/MCE happened, but this is
>>>>>>>> not the case.
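
[For context, the temporary pinning under discussion works roughly like
this -- a simplified sketch (field and function names are illustrative,
locking omitted), not the exact Xen code:

    /* Force v onto one pCPU, remembering its previous hard affinity. */
    void pin_for_nmi(struct vcpu *v, unsigned int cpu)
    {
        cpumask_copy(v->affinity_saved, v->cpu_hard_affinity);
        cpumask_copy(v->cpu_hard_affinity, cpumask_of(cpu));
        v->affinity_broken = 1;
    }

    /* Undo the pinning once the guest executes iret after handling
     * the NMI/#MC. */
    void on_guest_iret(struct vcpu *v)
    {
        if ( v->affinity_broken )
        {
            cpumask_copy(v->cpu_hard_affinity, v->affinity_saved);
            v->affinity_broken = 0;
        }
    }
]
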
>>>>>>>
>>>>>>> Then it was never finished or got broken, I would guess.
>>>>>>
>>>>>> Oh, no. The #MC side use has gone away in 3a91769d6e, without
>>>>>> cleaning up other code. So there doesn't seem to be any such
>>>>>> requirement anymore.
>>>>>
>>>>> So just to be sure: you are fine with me removing the pinning for
>>>>> NMIs?
>>>>
>>>> No, not the pinning as a whole. The forced CPU0 affinity should still
>>>> remain. It's just that there's no correlation anymore between the CPU
>>>> a vCPU was running on and the CPU it is to be pinned to (temporarily).
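
[In terms of the illustrative sketch above, that would mean calling
pin_for_nmi(v, 0) -- always CPU0 -- rather than
pin_for_nmi(v, st->processor), i.e. whichever pCPU v last ran on.]
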
>>>
>>> I don't get it. Today vcpu0 of the hardware domain is pinned to the
>>> cpu it was last running on when the NMI happened. Why is that
>>> important? Or do you want to change the logic and always pin vcpu0
>>> to CPU0 for NMI handling?
>>
>> It's (allegedly) for when dom0 knows some system-specific way of
>> getting extra information out of the platform that happens to be
>> core-specific.
>>
>> There are rare cases where SMIs need to be executed on CPU0, and I
>> wouldn't put it past hardware designers to have similar aspects for
>> NMIs.
>
> Understood. But today vcpu0 is _not_ bound to CPU0, but to whichever
> cpu it happened to be running on.
>
>>
>> That said, as soon as the gaping security hole which is the default
>> readability of all MSRs is closed, I bet the utility of this pinning
>> mechanism will be 0.
>
> And my reasoning is that this is the case today already, as there is
> no pinning to CPU0 done, at least not on purpose.

Based on this analysis, I'd be tempted to drop the pinning completely. 
It clearly isn't working in a rational way.

~Andrew
