
Re: [Xen-devel] [PATCH 0/4] mitigate the per-pCPU blocking list may be too long



On Thu, Apr 27, 2017 at 10:44:26AM +0100, George Dunlap wrote:
>On Thu, Apr 27, 2017 at 1:43 AM, Chao Gao <chao.gao@xxxxxxxxx> wrote:
>> On Wed, Apr 26, 2017 at 05:39:57PM +0100, George Dunlap wrote:
>>>On 26/04/17 01:52, Chao Gao wrote:
>>>> VT-d PI introduces a per-pCPU blocking list to track the blocked vCPUs
>>>> associated with that pCPU. Theoretically, there can be 32K domains on a
>>>> single host, with 128 vCPUs per domain. If all vCPUs are blocked on the
>>>> same pCPU, 4M vCPUs end up on the same list, and traversing such a list
>>>> consumes too much time. We have discussed this issue in [1,2,3].
>>>>
>>>> To mitigate this issue, we proposed the following two methods [3]:
>>>> 1. Evenly distributing all the blocked vCPUs among all pCPUs.
>>>
>>>So you're not actually distributing the *vcpus* among the pcpus (which
>>>would imply some interaction with the scheduler); you're distributing
>>>the vcpu PI wake-up interrupt between pcpus.  Is that right?
>>
>> Yes. I should describe things more clearly.
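
To make that concrete, here is a minimal sketch of the idea, i.e. distributing
the wake-up notification destination rather than the vCPUs themselves, by
choosing the pCPU whose blocking list is currently shortest. This is only an
illustration, not the posted patches; the per-pCPU counter and the function
names below are assumptions.

    /*
     * Minimal sketch only -- not the posted patches.  Assumes a per-pCPU
     * counter of blocked vCPUs; all names here are illustrative.
     */
    #include <limits.h>

    #define NR_PCPUS 256

    unsigned int pi_blocked_count[NR_PCPUS];

    /* Choose the pCPU whose PI blocking list is currently the shortest. */
    static unsigned int pi_pick_dest_pcpu(void)
    {
        unsigned int cpu, best = 0, min = UINT_MAX;

        for ( cpu = 0; cpu < NR_PCPUS; cpu++ )
            if ( pi_blocked_count[cpu] < min )
            {
                min = pi_blocked_count[cpu];
                best = cpu;
            }

        pi_blocked_count[best]++;
        return best;
    }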
>>
>>>
>>>Doesn't having a PI on a different pcpu than where the vcpu is running
>>>mean at least one IPI to wake up that vcpu?  If so, aren't we imposing a
>>>constant overhead on basically every single interrupt, as well as
>>>increasing the IPI traffic, in order to avoid a highly unlikely
>>>theoretical corner case?
>>
>> If it does incur at least one more IPI, I can't agree more. I think it
>> depends on whether calling vcpu_unblock() for a vCPU that does not run
>> on the current pCPU leads to an extra IPI compared to waking a vCPU that
>> does run on the current pCPU. In my mind, different schedulers may differ
>> on this point.
>
>Well I'm not aware of any way to tell another processor to do
>something in a timely manner other than with an IPI; and in any case
>that's the method that both credit1 and credit2 use.  It's true that
>not all vcpu_wake() calls will end up with an IPI, but a fairly large
>number of them will.  Avoiding this overhead when it's not necessary
>for performance is pretty important.

Ok, I agree and will avoid this overhead in the next version.
Really appreciate your comments.
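
One possible shape of that change -- purely an assumption at this point, not
the actual next version of the series -- would be to keep the notification on
the vCPU's own pCPU and only spill over to another pCPU when the local list
has already grown past a limit; the names and the limit are illustrative:

    /*
     * Purely an assumption, not the actual follow-up patches: keep the
     * notification on the vCPU's own pCPU unless that pCPU's blocking list
     * has already grown past a threshold.
     */
    #define PI_LIST_LIMIT 128

    extern unsigned int pi_blocked_count[];   /* per-pCPU list lengths */
    unsigned int pi_pick_dest_pcpu(void);     /* e.g. the earlier sketch */

    unsigned int pi_choose_ndst(unsigned int v_pcpu)
    {
        /* Common case: wake the vCPU where it runs, no extra IPI. */
        if ( pi_blocked_count[v_pcpu] < PI_LIST_LIMIT )
        {
            pi_blocked_count[v_pcpu]++;
            return v_pcpu;
        }

        /* Rare case: the local list is already long, spill over elsewhere. */
        return pi_pick_dest_pcpu();
    }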

Thanks
Chao
