
Re: [Xen-devel] [PATCH v5 1/4] VT-d PI: track the number of vcpus on pi blocking list



On Thu, Aug 31, 2017 at 02:33:57AM -0600, Jan Beulich wrote:
>>>> On 31.08.17 at 09:15, <chao.gao@xxxxxxxxx> wrote:
>> On Thu, Aug 31, 2017 at 01:42:53AM -0600, Jan Beulich wrote:
>>>>>> On 31.08.17 at 00:57, <chao.gao@xxxxxxxxx> wrote:
>>>> On Wed, Aug 30, 2017 at 10:00:49AM -0600, Jan Beulich wrote:
>>>>>>>> On 16.08.17 at 07:14, <chao.gao@xxxxxxxxx> wrote:
>>>>>> @@ -100,6 +101,24 @@ void vmx_pi_per_cpu_init(unsigned int cpu)
>>>>>>      spin_lock_init(&per_cpu(vmx_pi_blocking, cpu).lock);
>>>>>>  }
>>>>>>  
>>>>>> +static void vmx_pi_add_vcpu(struct pi_blocking_vcpu *pbv,
>>>>>> +                            struct vmx_pi_blocking_vcpu *vpbv)
>>>>>> +{
>>>>>> +    ASSERT(spin_is_locked(&vpbv->lock));
>>>>>
>>>>>You realize this is only a very weak check for a non-recursive lock?
>>>> 
>>>> I just thought the lock should be held when adding one entry to the
>>>> blocking list. Do you think we should remove this check or make it
>>>> stricter?
>>>
>>>Well, the primary purpose of my comment was to make you aware
>>>of the fact. If the weak check is good enough for you, then fine.
>> 
>> To be honest, I don't know the difference between a weak check and a
>> tight check.
>
>For non-recursive locks spin_is_locked() only tells you if _any_
>CPU in the system currently holds the lock. For recursive ones it
>checks whether it's the local CPU that owns the lock.

This remark is enlightening to me.
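
If I read it correctly, the difference is roughly the following (just a
sketch to check my understanding; the structure and the 'owner' field are
hypothetical, not the real Xen spinlock implementation):

struct sketch_lock {
    bool locked;          /* non-recursive: only records that "someone" holds it */
    unsigned int owner;   /* a recursive lock additionally records the owning CPU */
};

/* Weak check: passes even if a different CPU holds the lock. */
#define WEAK_LOCK_ASSERT(l) \
    ASSERT((l)->locked)

/* Strict check: only passes on the CPU that actually owns the lock. */
#define STRICT_LOCK_ASSERT(l) \
    ASSERT((l)->locked && (l)->owner == smp_processor_id())

So the ASSERT(spin_is_locked(...)) in this patch only gives the weak form
for a non-recursive lock.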

>
>>>Removing the check would be a bad idea imo (but see also below);
>>>tightening might be worthwhile, but might also go too far (depending
>>>mainly on how clearly provable it is that all callers actually hold the
>>>lock).
>> 
>> IMO, the lock was introduced (not by me) to protect the blocking list.
>> list_add() and list_del() should be performed with the lock held. So I
>> think it is clear that all callers should hold the lock.
>
>Good.
>
>>>>>> +    add_sized(&vpbv->counter, 1);
>>>>>> +    ASSERT(read_atomic(&vpbv->counter));
>>>>>
>>>>>Why add_sized() and read_atomic() when you hold the lock?
>>>>>
>>>> 
>>>> In patch 3, the counter is read frequently to find a suitable vcpu,
>>>> and using add_sized() and read_atomic() lets us avoid acquiring the
>>>> lock. In short, the lock doesn't protect the counter.
>>>
>>>In that case it would be more natural to switch to the atomic
>>>accesses there. Plus you still wouldn't need read_atomic()
>>>here, with the lock held. Furthermore I would then wonder
>>>whether it wasn't better to use atomic_t for the counter at
>> 
>> Is there some basic guide on when it is better to use read_atomic()
>> and add_sized(), and when it is better to define an atomic variable
>> directly?
>
>If an atomic_t variable fits your needs, I think it should always
>be preferred. add_sized() was introduced for a case where an
>atomic_t variable would not have been usable. Please also
>consult older commits for understanding the background.

OK, I will. Thanks for the guidance.
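
Just to confirm my understanding, the two options would look roughly like
the sketch below (based on my reading of the Xen accessors; the final
conversion will be in the next version):

static unsigned int plain_counter;
static atomic_t atomic_counter = ATOMIC_INIT(0);

/* Option A: plain integer plus sized/atomic accessors (what this patch does). */
static void count_with_sized(void)
{
    add_sized(&plain_counter, 1);      /* not a LOCKed update; writers still
                                        * need the lock to serialise */
    (void)read_atomic(&plain_counter); /* single-copy-atomic read, usable by
                                        * lock-less readers */
}

/* Option B: an atomic_t, as you suggest. */
static void count_with_atomic(void)
{
    atomic_inc(&atomic_counter);       /* atomic RMW, safe even without the lock */
    (void)atomic_read(&atomic_counter);
}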

>
>>>that point. Also with lock-less readers the requirement to
>>>hold a lock here (rather than using suitable LOCKed accesses)
>>>becomes questionable too.
>> 
>> As I said above, I think the lock is used to protect the list.
>> 
>> I think this patch has two parts:
>> 1. Move all list operations into two inline functions. (With this, adding
>> a counter is easier and we don't need to add code in several places.)
>> 
>> 2. Add a counter.
>
>With it being left unclear whether the counter is meant to
>also be protected by the lock: In the patch here you claim it
>is, yet by later introducing lock-less readers you weaken
>that model. Hence the request to bring things into a
>consistent state right away, and ideally also into the final
>state.

Sure. I will clarify this and make things consistent.
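
Concretely, for the next version I plan to make the two helpers look
something like the sketch below, with an atomic_t counter and the lock
protecting only the list (names follow the current patch but may still
change):

static void vmx_pi_add_vcpu(struct pi_blocking_vcpu *pbv,
                            struct vmx_pi_blocking_vcpu *vpbv)
{
    ASSERT(spin_is_locked(&vpbv->lock));
    /* The lock serialises all manipulation of the blocking list... */
    list_add_tail(&pbv->list, &vpbv->list);
    /*
     * ...while the counter is an atomic_t so that patch 3 can read it
     * without taking the lock.
     */
    atomic_inc(&vpbv->counter);
}

static void vmx_pi_del_vcpu(struct pi_blocking_vcpu *pbv,
                            struct vmx_pi_blocking_vcpu *vpbv)
{
    ASSERT(spin_is_locked(&vpbv->lock));
    ASSERT(atomic_read(&vpbv->counter));
    list_del(&pbv->list);
    atomic_dec(&vpbv->counter);
}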

Thanks
Chao

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

