
Re: [Xen-devel] [PATCH 2/2] xen: merge temporary vcpu pinning scenarios


  • To: Juergen Gross <JGross@xxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Tue, 23 Jul 2019 15:04:06 +0000
  • Cc: Tim Deegan <tim@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Tue, 23 Jul 2019 15:07:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 23.07.2019 16:29, Juergen Gross wrote:
> On 23.07.19 16:14, Jan Beulich wrote:
>> On 23.07.2019 16:03, Jan Beulich wrote:
>>> On 23.07.2019 15:44, Juergen Gross wrote:
>>>> On 23.07.19 14:42, Jan Beulich wrote:
>>>>> v->processor gets latched into st->processor before raising the softirq,
>>>>> but can't the vCPU be moved elsewhere by the time the softirq handler
>>>>> actually gains control? If that's not possible (and if it's not obvious
>>>>> why, and as you can see it's not obvious to me), then I think a code
>>>>> comment wants to be added there.
>>>>
>>>> You are right, it might be possible for the vcpu to move around.
>>>>
>>>> OTOH is it really important to run the target vcpu exactly on the cpu
>>>> it is executing on (or has last executed on) at the time the NMI/MCE is
>>>> being queued? This is in no way related to the cpu the MCE or NMI
>>>> happened on. It is just a random cpu, and it would be just as random if
>>>> we did the cpu selection when the softirq handler is running.
>>>>
>>>> One question to understand the idea behind all that: _why_ is the vcpu
>>>> pinned until it does an iret? I could understand if it would be pinned
>>>> to the cpu where the NMI/MCE was happening, but this is not the case.
>>>
>>> Then it was never finished or got broken, I would guess.
>>
>> Oh, no. The #MC side use has gone away in 3a91769d6e, without cleaning
>> up other code. So there doesn't seem to be any such requirement anymore.
> 
> So just to be sure: you are fine with me removing the pinning for NMIs?

No, not the pinning as a whole. The forced CPU0 affinity should still
remain. It's just that there's no correlation anymore between the CPU
a vCPU was running on and the CPU it is to be pinned to (temporarily).

What can go away is the #MC part of the logic.
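
To make the earlier point about latching concrete, the pattern under
discussion looks roughly like this (a simplified sketch, not the actual
Xen code; the softirq_trap layout and the queue_nmi_for() helper are
approximations for illustration only):

/* Per-CPU slot describing the trap the softirq handler is to deliver. */
struct softirq_trap {
    struct domain *domain;      /* domain of the target vCPU */
    struct vcpu   *vcpu;        /* vCPU to deliver the NMI/#MC to */
    unsigned int   processor;   /* pCPU latched at raise time */
};

static DEFINE_PER_CPU(struct softirq_trap, softirq_trap);

static void queue_nmi_for(struct vcpu *v)
{
    struct softirq_trap *st = &this_cpu(softirq_trap);

    st->domain = v->domain;
    st->vcpu = v;
    st->processor = v->processor;   /* snapshot of v->processor ...      */
    raise_softirq(NMI_MCE_SOFTIRQ); /* ... but v may migrate before the  */
                                    /* softirq handler runs, so the      */
                                    /* snapshot can already be stale.    */
}

Since that snapshot has no particular meaning by the time the handler
runs, pinning the vCPU to a fixed CPU (CPU0) is no worse than pinning
it to the latched one.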

Jan

 

