
Re: [Xen-devel] [PATCHv8 1/3] gnttab: Introduce rwlock to protect updates to grant table state



>>> On 12.05.15 at 16:15, <david.vrabel@xxxxxxxxxx> wrote:
> @@ -629,7 +629,7 @@ __gnttab_map_grant_ref(
>      }
>  
>      rgt = rd->grant_table;
> -    spin_lock(&rgt->lock);
> +    read_lock(&rgt->lock);
>  
>      if ( rgt->gt_version == 0 )
>          PIN_FAIL(unlock_out, GNTST_general_error,
> @@ -702,7 +702,7 @@ __gnttab_map_grant_ref(
>  
>      cache_flags = (shah->flags & (GTF_PAT | GTF_PWT | GTF_PCD) );
>  
> -    spin_unlock(&rgt->lock);
> +    read_unlock(&rgt->lock);

Lock dropped once, and ...

> @@ -778,7 +778,7 @@ __gnttab_map_grant_ref(
>          goto undo_out;
>      }
>  
> -    double_gt_lock(lgt, rgt);
> +    double_maptrack_lock(lgt, rgt);
>  
>      if ( gnttab_need_iommu_mapping(ld) )
>      {
> @@ -801,7 +801,7 @@ __gnttab_map_grant_ref(
>          }
>          if ( err )
>          {
> -            double_gt_unlock(lgt, rgt);
> +            double_maptrack_unlock(lgt, rgt);
>              rc = GNTST_general_error;
>              goto undo_out;
>          }
> @@ -814,7 +814,8 @@ __gnttab_map_grant_ref(
>      mt->ref   = op->ref;
>      mt->flags = op->flags;
>  
> -    double_gt_unlock(lgt, rgt);
> +    double_maptrack_unlock(lgt, rgt);
> +    read_unlock(&rgt->lock);

... and dropped here a second time without having been re-acquired in
between. Since this appears to be a recurring theme in this first patch -
was this patch ever tested on its own? (I'm sure I'll again find one of
the following patches fixing this issue as a "side effect".)

> @@ -939,7 +941,9 @@ __gnttab_unmap_common(
>      TRACE_1D(TRC_MEM_PAGE_GRANT_UNMAP, dom);
>  
>      rgt = rd->grant_table;
> -    double_gt_lock(lgt, rgt);
> +
> +    read_lock(&rgt->lock);
> +    double_maptrack_lock(lgt, rgt);
>  
>      op->flags = op->map->flags;
>      if ( unlikely(!op->flags) || unlikely(op->map->domid != dom) )
> @@ -1009,7 +1013,9 @@ __gnttab_unmap_common(
>           gnttab_mark_dirty(rd, op->frame);
>  
>   unmap_out:
> -    double_gt_unlock(lgt, rgt);
> +    double_maptrack_unlock(lgt, rgt);
> +    read_unlock(&rgt->lock);

Am I right to conclude from this that the fix for the issue above is
to add a read_lock(), not to drop the unbalanced read_unlock()?

Which then of course raises the question - is taking the read lock
here and in several other places really sufficient? The thing that the
original spin lock appears to protect here is not only the grant table
structure itself, but also the active entry. I.e. relaxing to a read
lock would seem possible only after having put per-active-entry
locking in place.

> @@ -64,6 +64,11 @@ struct grant_mapping {
>  
>  /* Per-domain grant information. */
>  struct grant_table {
> +    /* 
> +     * Lock protecting updates to grant table state (version, active
> +     * entry list, etc.)
> +     */
> +    rwlock_t              lock;
>      /* Table size. Number of frames shared with guest */
>      unsigned int          nr_grant_frames;
>      /* Shared grant table (see include/public/grant_table.h). */
> @@ -82,8 +87,8 @@ struct grant_table {
>      struct grant_mapping **maptrack;
>      unsigned int          maptrack_head;
>      unsigned int          maptrack_limit;
> -    /* Lock protecting updates to active and shared grant tables. */
> -    spinlock_t            lock;
> +    /* Lock protecting the maptrack page list, head, and limit */
> +    spinlock_t            maptrack_lock;
>      /* The defined versions are 1 and 2.  Set to 0 if we don't know
>         what version to use yet. */
>      unsigned              gt_version;

So in order for the fields protected by one of the locks to be as likely
as possible to share a cache line with the lock itself, I think
gt_version should also be moved up in the structure. We may even
want/need to add some padding between the basic and the maptrack data to
ensure they end up on different cache lines.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
