
Re: [Xen-devel] [PATCHv9 3/4] gnttab: make the grant table lock a read-write lock



On 21/05/15 11:32, Jan Beulich wrote:
>>>> On 20.05.15 at 17:54, <david.vrabel@xxxxxxxxxx> wrote:
>> @@ -254,23 +254,23 @@ double_gt_lock(struct grant_table *lgt, struct grant_table *rgt)
>>  {
>>      if ( lgt < rgt )
>>      {
>> -        spin_lock(&lgt->lock);
>> -        spin_lock(&rgt->lock);
>> +        write_lock(&lgt->lock);
>> +        write_lock(&rgt->lock);
>>      }
>>      else
>>      {
>>          if ( lgt != rgt )
>> -            spin_lock(&rgt->lock);
>> -        spin_lock(&lgt->lock);
>> +            write_lock(&rgt->lock);
>> +        write_lock(&lgt->lock);
>>      }
>>  }
> 
> Do both of them need to be write locks now?

I don't know.  This isn't used in any path I care about, so I didn't
spend any time analysing it; I just made the obviously correct change.

Someone who cares about the performance of this path will have to do
the analysis needed to improve it further.
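
For anyone who does pick this up: if the analysis shows the remote
table is only ever read on this path, a variant along these lines
might be enough (purely a hypothetical sketch, not something I've
verified to be safe; the matching unlock would need the same
treatment):

    /* Hypothetical sketch only: assumes callers never modify the
     * remote table's state, which has NOT been verified. */
    static inline void
    double_gt_lock(struct grant_table *lgt, struct grant_table *rgt)
    {
        if ( lgt < rgt )
        {
            write_lock(&lgt->lock);
            read_lock(&rgt->lock);
        }
        else
        {
            if ( lgt != rgt )
                read_lock(&rgt->lock);
            write_lock(&lgt->lock);
        }
    }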

>> @@ -806,12 +806,13 @@ __gnttab_map_grant_ref(
>>          goto undo_out;
>>      }
>>  
>> -    double_gt_lock(lgt, rgt);
>> -
>>      if ( gnttab_need_iommu_mapping(ld) )
>>      {
>>          unsigned int wrc, rdc;
>>          int err = 0;
>> +
>> +        double_gt_lock(lgt, rgt);
>> +
>>          /* We're not translated, so we know that gmfns and mfns are
>>             the same things, so the IOMMU entry is always 1-to-1. */
>>          mapcount(lgt, rd, frame, &wrc, &rdc);
>> @@ -827,9 +828,11 @@ __gnttab_map_grant_ref(
>>              if ( (wrc + rdc) == 0 )
>>                  err = iommu_map_page(ld, frame, frame, IOMMUF_readable);
>>          }
>> +
>> +        double_gt_lock(lgt, rgt);
> 
> unlock. And with this code path actually used (due to the bug it's
> pretty clear it didn't get exercised in your testing), what does
> performance look like?

I think it will be no worse than what it was before -- this path already
really sucks (mapcount() loops over 1000s of entries).  I don't care
about this path at all.

>> @@ -842,8 +845,6 @@ __gnttab_map_grant_ref(
>>      mt->ref   = op->ref;
>>      mt->flags = op->flags;
>>  
>> -    double_gt_unlock(lgt, rgt);
> 
> Don't the mt-> updates above need some kind of protection?

It depends:

If not gnttab_need_iommu_mapping(), then no lock is needed; we only
need a write barrier before the mt->flags store.

If gnttab_need_iommu_mapping(), then a lock is required to prevent racing
with concurrent mapcount() calls.  This is not a path I care about, so
it's easiest to just keep the double lock around this.
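
For the non-IOMMU case, what I have in mind is roughly the following
(sketch only, using the fields visible in the hunk below; not the
exact final code):

    /* Sketch: no lock needed, but the earlier field stores (mt->ref
     * etc.) must be visible before the flags store that makes the
     * maptrack entry usable. */
    mt->ref   = op->ref;
    wmb();                   /* order field stores before flags */
    mt->flags = op->flags;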

>> @@ -2645,7 +2653,7 @@ __gnttab_swap_grant_ref(grant_ref_t ref_a, grant_ref_t ref_b)
>>      struct active_grant_entry *act_b = NULL;
>>      s16 rc = GNTST_okay;
>>  
>> -    spin_lock(&gt->lock);
>> +    write_lock(&gt->lock);
>>  
>>      /* Bounds check on the grant refs */
>>      if ( unlikely(ref_a >= nr_grant_entries(d->grant_table)))
>> @@ -2689,7 +2697,7 @@ out:
>>          active_entry_release(act_b);
>>      if ( act_a != NULL )
>>          active_entry_release(act_a);
>> -    spin_unlock(&gt->lock);
>> +    write_unlock(&gt->lock);
> 
> With the per-entry locks, does this still need to be a write lock?

Yes, because we take two active entry locks and requiring the write lock
is easier than ordering the active_entry_acquire()s.

Again, this isn't a path I care about.
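
For completeness, the read-lock alternative would mean taking the two
per-entry locks in a fixed order, roughly like this (hypothetical
sketch only, not something I'm proposing here):

    /* Hypothetical: acquire the active entries in grant ref order so
     * that a read lock on gt->lock would suffice. */
    read_lock(&gt->lock);

    /* ... bounds checks on ref_a and ref_b ... */

    if ( ref_a == ref_b )
        act_a = active_entry_acquire(gt, ref_a);
    else if ( ref_a < ref_b )
    {
        act_a = active_entry_acquire(gt, ref_a);
        act_b = active_entry_acquire(gt, ref_b);
    }
    else
    {
        act_b = active_entry_acquire(gt, ref_b);
        act_a = active_entry_acquire(gt, ref_a);
    }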

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel