
Re: [Xen-devel] [PATCH v3 2/2] gnttab: refactor locking for scalability



>>> On 03.12.14 at 15:29, <chegger@xxxxxxxxx> wrote:
> @@ -182,6 +185,29 @@ nr_active_grant_frames(struct grant_table *gt)
>      return num_act_frames_from_sha_frames(nr_grant_frames(gt));
>  }
>  
> +static inline struct active_grant_entry *
> +active_entry_acquire(struct grant_table *t, grant_ref_t e)
> +{
> +    struct active_grant_entry *act;
> +
> +#ifndef NDEBUG
> +    /* not perfect, but better than nothing for a debug build
> +     * sanity check
> +     */
> +    BUG_ON(!rw_is_locked(&t->lock));
> +#endif

ASSERT() is the canonical way to achieve this.

> @@ -508,17 +511,27 @@ static int grant_map_exists(const struct domain *ld,
>                     nr_grant_entries(rgt));
>      for ( ref = *ref_count; ref < max_iter; ref++ )
>      {
> -        act = &active_entry(rgt, ref);
> +        act = active_entry_acquire(rgt, ref);
>  
>          if ( !act->pin )
> +        {
> +            active_entry_release(act);
>              continue;
> +        }
>  
>          if ( act->domid != ld->domain_id )
> +        {
> +            active_entry_release(act);
>              continue;
> +        }
>  
>          if ( act->frame != mfn )
> +        {
> +            active_entry_release(act);
>              continue;
> +        }

With this repeated 3 times, perhaps it would be simpler and easier to
read to put the release ahead of the increment inside the for() header?

> @@ -531,24 +544,44 @@ static int grant_map_exists(const struct domain *ld,
>      return -EINVAL;
>  }
>  
> +/* Count the number of mapped ro or rw entries tracked in the left
> + * grant table given a provided mfn provided by a foreign domain. 
> + *
> + * This function takes the maptrack lock from the left grant table and
> + * must be called with the right grant table's rwlock held.
> + */
>  static void mapcount(
>      struct grant_table *lgt, struct domain *rd, unsigned long mfn,
>      unsigned int *wrc, unsigned int *rdc)
>  {
>      struct grant_mapping *map;
>      grant_handle_t handle;
> +    struct active_grant_entry *act;

I don't see the need for this new local variable.

>      *wrc = *rdc = 0;
>  
> +    /* N.B.: while taking the left side maptrack spinlock prevents
> +     * any mapping changes, the right side active entries could be
> +     * changing while we are counting. To be perfectly correct, the
> +     * caller should hold the grant table write lock, or some other
> +     * mechanism should be used to prevent concurrent changes during
> +     * this operation. This is tricky because we can't promote a read
> +     * lock into a write lock.
> +     */
> +    spin_lock(&lgt->maptrack_lock);

We can't afford being less than "perfectly correct" here, as the
insertion/removal of IOMMU mappings is driven by the result of
this function. Or wait, together with the comment ahead of the
function, are you perhaps simply meaning "has to" instead of
"should", and perhaps without the "To be perfectly correct"
precondition? Also, holding the grant table lock for reading ought
to be sufficient to prevent changes, which you should then ASSERT()
to be the case, just like you do in active_entry_acquire().

> @@ -692,12 +724,9 @@ __gnttab_map_grant_ref(
>              GNTPIN_hstr_inc : GNTPIN_hstw_inc;
>  
>      frame = act->frame;
> -    act_pin = act->pin;
>  
>      cache_flags = (shah->flags & (GTF_PAT | GTF_PWT | GTF_PCD) );
>  
> -    read_unlock(&rgt->lock);
> -
>      /* pg may be set, with a refcount included, from __get_paged_frame */
>      if ( !pg )
>      {
> @@ -772,8 +801,6 @@ __gnttab_map_grant_ref(
>          goto undo_out;
>      }
>  
> -    double_gt_lock(lgt, rgt);
> -
>      if ( gnttab_need_iommu_mapping(ld) )
>      {
>          unsigned int wrc, rdc;

Taking these two together - isn't patch 1 wrong then, in that it
acquires both maptrack locks in double_gt_lock()?

> @@ -891,9 +917,9 @@ __gnttab_unmap_common(
>      ld = current->domain;
>      lgt = ld->grant_table;
>  
> -    read_lock(&lgt->lock);
>      op->frame = (unsigned long)(op->dev_bus_addr >> PAGE_SHIFT);
>  
> +    read_lock(&lgt->lock);
>      if ( unlikely(op->handle >= lgt->maptrack_limit) )
>      {
>          read_unlock(&lgt->lock);

This should be done this way in patch 1.

> @@ -933,19 +959,18 @@ __gnttab_unmap_common(
>      TRACE_1D(TRC_MEM_PAGE_GRANT_UNMAP, dom);
>  
>      rgt = rd->grant_table;
> -    double_gt_lock(lgt, rgt);
>  
>      op->flags = op->map->flags;
>      if ( unlikely(!op->flags) || unlikely(op->map->domid != dom) )

How are these checks guarded now?

> @@ -1310,6 +1341,10 @@ gnttab_grow_table(struct domain *d, unsigned int 
> req_nr_frames)
>          if ( (gt->active[i] = alloc_xenheap_page()) == NULL )
>              goto active_alloc_failed;
>          clear_page(gt->active[i]);
> +        for ( j = 0; j < ACGNT_PER_PAGE; j++ )
> +        {
> +            spin_lock_init(&gt->active[i][j].lock);
> +        }

Please omit the braces when the body is just a single statement.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

