
Re: [Xen-devel] [PATCH v5 1/2] gnttab: Introduce rwlock to protect updates to grant table state



>>> On 10.02.15 at 13:51, <chegger@xxxxxxxxx> wrote:
> @@ -629,7 +629,7 @@ __gnttab_map_grant_ref(
>      }
>  
>      rgt = rd->grant_table;
> -    spin_lock(&rgt->lock);
> +    read_lock(&rgt->lock);
>  
>      if ( rgt->gt_version == 0 )
>          PIN_FAIL(unlock_out, GNTST_general_error,
> @@ -702,8 +702,6 @@ __gnttab_map_grant_ref(
>  
>      cache_flags = (shah->flags & (GTF_PAT | GTF_PWT | GTF_PCD) );
>  
> -    spin_unlock(&rgt->lock);
> -
>      /* pg may be set, with a refcount included, from __get_paged_frame */
>      if ( !pg )
>      {
> @@ -778,7 +776,7 @@ __gnttab_map_grant_ref(
>          goto undo_out;

With the unlock above dropped, this and other similar gotos now
end up acquiring the lock a second time, but then dropping it only
once. As said before (on a different piece of code), changes to the
regions which get locked should be made explicit in the patch
description, allowing - if not spotted during review - later bug
fixing to be done with a proper understanding of the intentions.

>      }
>  
> -    double_gt_lock(lgt, rgt);
> +    double_maptrack_lock(lgt, rgt);

Looking at this nested locking, I find that you _still_ don't say
anything about the lock hierarchy in the doc change.

> @@ -1971,17 +1979,23 @@ __acquire_grant_for_copy(
>                  PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                           "transitive grant referenced bad domain %d\n",
>                           trans_domid);
> -            spin_unlock(&rgt->lock);
> +
> +            /* 
> +             * __acquire_grant_for_copy() could take the read lock on
> +             * the right table (if rd == td), so we have to drop the
> +             * lock here and reacquire
> +             */

Almost - there's still a full stop missing here.

> @@ -3129,7 +3147,9 @@ grant_table_destroy(
>  
>      if ( t == NULL )
>          return;
> -    
> +
> +    write_lock(&t->lock);
> +
>      for ( i = 0; i < nr_grant_frames(t); i++ )
>          free_xenheap_page(t->shared_raw[i]);
>      xfree(t->shared_raw);
> @@ -3146,6 +3166,8 @@ grant_table_destroy(
>          free_xenheap_page(t->status[i]);
>      xfree(t->status);
>  
> +    write_unlock(&t->lock);

Do we really need the lock here? If so (as also said before), this
ought to be a separate fix (needing backporting).

> --- a/xen/include/xen/grant_table.h
> +++ b/xen/include/xen/grant_table.h
> @@ -82,10 +82,17 @@ struct grant_table {
>      struct grant_mapping **maptrack;
>      unsigned int          maptrack_head;
>      unsigned int          maptrack_limit;
> -    /* Lock protecting updates to active and shared grant tables. */
> -    spinlock_t            lock;
> -    /* The defined versions are 1 and 2.  Set to 0 if we don't know
> -       what version to use yet. */
> +    /* Lock protecting the maptrack page list, head, and limit */
> +    spinlock_t            maptrack_lock;
> +    /* 
> +     * Lock protecting updates to grant table state (version, active
> +     * entry list, etc.)
> +     */
> +    rwlock_t              lock;

Repeating the recently raised similar question (in another context):
Is it really a good idea to have two locks on the same cache line?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

