
Re: [Xen-devel] [PATCHv7 1/3] gnttab: Introduce rwlock to protect updates to grant table state



>>> On 30.04.15 at 15:28, <david.vrabel@xxxxxxxxxx> wrote:
> --- a/docs/misc/grant-tables.txt
> +++ b/docs/misc/grant-tables.txt
> @@ -74,7 +74,33 @@ is complete.
>   matching map track entry is then removed, as if unmap had been invoked.
>   These are not used by the transfer mechanism.
>    map->domid         : owner of the mapped frame
> -  map->ref_and_flags : grant reference, ro/rw, mapped for host or device access
> +  map->ref           : grant reference
> +  map->flags         : ro/rw, mapped for host or device access
> +
> +********************************************************************************
> + Locking
> + ~~~~~~~
> + Xen uses several locks to serialize access to the internal grant table state.
> +
> +  grant_table->lock          : rwlock used to prevent readers from accessing
> +                               inconsistent grant table state such as current
> +                               version, partially initialized active table pages,
> +                               etc.
> +  grant_table->maptrack_lock : spinlock used to protect the maptrack state
> +
> + The primary lock for the grant table is a read/write spinlock. All
> + functions that access members of struct grant_table must acquire a
> + read lock around critical sections. Any modification to the members
> + of struct grant_table (e.g., nr_status_frames, nr_grant_frames,
> + active frames, etc.) must only be made if the write lock is
> + held. These elements are read-mostly, and read critical sections can
> + be large, which makes a rwlock a good choice.
> +
> + The maptrack state is protected by its own spinlock. Any access (read
> + or write) of struct grant_table members that have a "maptrack_"
> + prefix must be made while holding the maptrack lock. The maptrack
> + state can be rapidly modified under some workloads, and the critical
> + sections are very small, thus we use a spinlock to protect them.

Still no word about nesting between the two?
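
What I'd expect to be documented, judging by the later hunks, is that the
rwlock is the outer lock and the maptrack spinlock the inner one. A
standalone sketch of that discipline (pthread primitives and made-up names
standing in for Xen's types, purely illustrative and not from the patch):

/*
 * Illustrative sketch only, not Xen code: pthread primitives and the
 * toy_grant_table type stand in for grant_table, its rwlock and its
 * maptrack spinlock.  It shows the nesting the later hunks appear to
 * follow: the rwlock is always taken first, the maptrack lock only
 * inside it, and never the other way around.
 */
#include <pthread.h>

struct toy_grant_table {
    pthread_rwlock_t lock;           /* stands in for grant_table->lock          */
    pthread_mutex_t  maptrack_lock;  /* stands in for grant_table->maptrack_lock */
    unsigned int     nr_grant_frames;
    unsigned int     maptrack_head;
};

/* Read path that also updates maptrack state (cf. __gnttab_map_grant_ref). */
static void map_path(struct toy_grant_table *gt)
{
    pthread_rwlock_rdlock(&gt->lock);        /* outer: rwlock, read side */

    /* ... read version, shared/active entries ... */

    pthread_mutex_lock(&gt->maptrack_lock);  /* inner: maptrack spinlock */
    gt->maptrack_head++;                     /* ... maptrack_* updates   */
    pthread_mutex_unlock(&gt->maptrack_lock);

    pthread_rwlock_unlock(&gt->lock);
}

/* Structural change (cf. gnttab_grow_table): write side, exclusive. */
static void grow_path(struct toy_grant_table *gt)
{
    pthread_rwlock_wrlock(&gt->lock);
    gt->nr_grant_frames++;
    pthread_rwlock_unlock(&gt->lock);
}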

> @@ -629,7 +629,7 @@ __gnttab_map_grant_ref(
>      }
>  
>      rgt = rd->grant_table;
> -    spin_lock(&rgt->lock);
> +    read_lock(&rgt->lock);
>  
>      if ( rgt->gt_version == 0 )
>          PIN_FAIL(unlock_out, GNTST_general_error,
> @@ -702,8 +702,6 @@ __gnttab_map_grant_ref(
>  
>      cache_flags = (shah->flags & (GTF_PAT | GTF_PWT | GTF_PCD) );
>  
> -    spin_unlock(&rgt->lock);
> -
>      /* pg may be set, with a refcount included, from __get_paged_frame */
>      if ( !pg )
>      {
> @@ -778,7 +776,7 @@ __gnttab_map_grant_ref(
>          goto undo_out;
>      }
>  
> -    double_gt_lock(lgt, rgt);
> +    double_maptrack_lock(lgt, rgt);

So looking just at the patch context here together with the two earlier
hunks: you now keep holding the rwlock, yet the code following the
undo_out label read-locks it a second time.
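
To make that concrete, here is a schematic of the resulting flow as I read
it (standalone C, with a pthread rwlock standing in for rgt->lock and a
made-up setup_fails() for the failing step; not the actual function):

/*
 * With the spin_unlock() hunk dropped, the read lock taken at the top is
 * still held when an error branches to undo_out, and the undo path then
 * read-locks the same lock a second time.
 */
#include <pthread.h>
#include <stdbool.h>

static pthread_rwlock_t rgt_lock = PTHREAD_RWLOCK_INITIALIZER;

static bool setup_fails(void) { return true; }  /* stand-in for the error path */

static void map_grant_ref_flow(void)
{
    pthread_rwlock_rdlock(&rgt_lock);   /* was spin_lock(), now read_lock()    */

    /* ... grant checked, maptrack entry filled in ...                         */
    /* the spin_unlock(&rgt->lock) that used to be here has been removed       */

    if ( setup_fails() )
        goto undo_out;                  /* read lock still held at this point  */

    pthread_rwlock_unlock(&rgt_lock);
    return;

 undo_out:
    pthread_rwlock_rdlock(&rgt_lock);   /* second acquisition of the same lock */
    /* ... roll back pin counts, release maptrack entry ...                    */
    pthread_rwlock_unlock(&rgt_lock);   /* only one of the two is released     */
}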

> @@ -1290,10 +1294,14 @@ gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
>      gt->nr_status_frames = 0;
>  }
>  
> +/* 
> + * Grow the grant table. The caller must hold the grant table's
> + * write lock before calling this function.
> + */
>  int
>  gnttab_grow_table(struct domain *d, unsigned int req_nr_frames)
>  {
> -    /* d's grant table lock must be held by the caller */
> +    /* d's grant table write lock must be held by the caller */

This comment is now redundant with the one you add ahead of
the function.

> @@ -3052,7 +3068,8 @@ gnttab_release_mappings(
>          }
>  
>          rgt = rd->grant_table;
> -        spin_lock(&rgt->lock);
> +        read_lock(&rgt->lock);
> +        double_maptrack_lock(gt, rgt);

What is it that requires gt's maptrack lock to also be acquired here?
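
For the record, I'm assuming double_maptrack_lock() mirrors the existing
double_gt_lock() idiom, i.e. it takes the two domains' maptrack locks in a
fixed (address) order so that concurrent local->remote and remote->local
callers cannot ABBA-deadlock. A standalone sketch of that assumption
(stand-in types, not the patch's code):

#include <pthread.h>

struct toy_gt {
    pthread_mutex_t maptrack_lock;
};

/* Take both maptrack locks in a fixed order to avoid ABBA deadlock. */
static void double_maptrack_lock(struct toy_gt *lgt, struct toy_gt *rgt)
{
    if ( lgt < rgt )
    {
        pthread_mutex_lock(&lgt->maptrack_lock);
        pthread_mutex_lock(&rgt->maptrack_lock);
    }
    else
    {
        if ( lgt != rgt )
            pthread_mutex_lock(&rgt->maptrack_lock);
        pthread_mutex_lock(&lgt->maptrack_lock);
    }
}

static void double_maptrack_unlock(struct toy_gt *lgt, struct toy_gt *rgt)
{
    pthread_mutex_unlock(&lgt->maptrack_lock);
    if ( lgt != rgt )
        pthread_mutex_unlock(&rgt->maptrack_lock);
}

If that matches the patch, then the question above really is whether the
local domain's maptrack state gets touched at all on this release path.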

Jan

