
Re: [PATCH v5 2/6] evtchn: convert domain event lock to an r/w one



On 27.05.2021 13:01, Roger Pau Monné wrote:
> On Wed, Jan 27, 2021 at 09:16:07AM +0100, Jan Beulich wrote:
>> Especially for the use in evtchn_move_pirqs() (called when moving a vCPU
>> across pCPU-s) and the ones in EOI handling in PCI pass-through code,
>> serializing perhaps an entire domain isn't helpful when no state (which
>> isn't e.g. further protected by the per-channel lock) changes.
> 
> I'm unsure this move is good from a performance PoV, as the operations
> switched to use the lock in read mode are a very small subset, and the
> remaining operations then take a performance penalty compared to using
> a plain spin lock.

Well, yes, unfortunately review of earlier versions has resulted in
there being quite a few fewer read_lock() uses now than I (mistakenly)
had originally. There are a few worthwhile conversions, but on the
whole maybe I should indeed drop this change.

>> @@ -1510,9 +1509,10 @@ int evtchn_destroy(struct domain *d)
>>  {
>>      unsigned int i;
>>  
>> -    /* After this barrier no new event-channel allocations can occur. */
>> +    /* After this kind-of-barrier no new event-channel allocations can occur. */
>>      BUG_ON(!d->is_dying);
>> -    spin_barrier(&d->event_lock);
>> +    read_lock(&d->event_lock);
>> +    read_unlock(&d->event_lock);
> 
> Don't you want to use write mode here to ensure there are no read
> users that have taken the lock before is_dying has been set, and thus
> could make wrong assumptions?
> 
> As I understand it, the point of the barrier here is to ensure there
> are no lockers carried over from before is_dying has been set.

The purpose is, as the comment says, no new event channel allocations.
Those happen under write lock, so a read-lock-based barrier is enough
here afaict.
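
For illustration, here is a minimal sketch of the lock-as-barrier idiom
being discussed, written against POSIX rwlocks rather than Xen's rwlock
API; alloc_channel() and domain_destroy() are hypothetical stand-ins
for the real allocation and teardown paths, not the actual Xen
functions:

#include <pthread.h>
#include <stdbool.h>

static pthread_rwlock_t event_lock = PTHREAD_RWLOCK_INITIALIZER;
static bool is_dying;

/* Allocation path: only ever runs with the lock held in write mode. */
static int alloc_channel(void)
{
    int ret = 0;

    pthread_rwlock_wrlock(&event_lock);
    if (is_dying)
        ret = -1;               /* No new allocations once dying. */
    else
        ;                       /* ... allocate the event channel ... */
    pthread_rwlock_unlock(&event_lock);

    return ret;
}

/* Teardown path: the read-lock/unlock pair is the "kind-of-barrier". */
static void domain_destroy(void)
{
    is_dying = true;            /* Set before the barrier, as in Xen. */

    /*
     * Acquiring the lock in read mode waits for any writer, i.e. any
     * allocation that raced with setting is_dying, to drop the lock.
     * Every allocation starting after the unlock takes the write lock,
     * observes is_dying, and bails, so no new channels can appear.
     */
    pthread_rwlock_rdlock(&event_lock);
    pthread_rwlock_unlock(&event_lock);

    /* ... safe to tear the event channels down ... */
}

A write-mode acquisition would of course also serve as the barrier, but
since the allocation path only checks the flag while holding the lock
in write mode, draining writers via a read-mode acquisition is
sufficient, which is the point made above.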

Jan
