
Re: [Xen-devel] [PATCHv6 2/3] grant_table: convert grant table rwlock to percpu rwlock



>>> On 22.01.16 at 14:41, <malcolm.crossley@xxxxxxxxxx> wrote:
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -178,6 +178,8 @@ struct active_grant_entry {
>  #define _active_entry(t, e) \
>      ((t)->active[(e)/ACGNT_PER_PAGE][(e)%ACGNT_PER_PAGE])
>  
> +DEFINE_PERCPU_RWLOCK_GLOBAL(grant_rwlock);
> +
>  static inline void gnttab_flush_tlb(const struct domain *d)
>  {
>      if ( !paging_mode_external(d) )
> @@ -208,7 +210,13 @@ active_entry_acquire(struct grant_table *t, grant_ref_t 
> e)
>  {
>      struct active_grant_entry *act;
>  
> -    ASSERT(rw_is_locked(&t->lock));
> +    /* 
> +     * The grant table for the active entry should be locked but the 
> +     * percpu rwlock cannot be checked for read lock without race conditions
> +     * or high overhead so we cannot use an ASSERT 
> +     *
> +     *   ASSERT(rw_is_locked(&t->lock));
> +     */

There are a number of trailing blanks being added here (and further
down), which I'm fixing up as I'm in the process of applying this. The
reason I noticed though is that this hunk ...

> @@ -660,7 +668,13 @@ static int grant_map_exists(const struct domain *ld,
>  {
>      unsigned int ref, max_iter;
>  
> -    ASSERT(rw_is_locked(&rgt->lock));
> +    /* 
> +     * The remote grant table should be locked but the percpu rwlock
> +     * cannot be checked for read lock without race conditions or high 
> +     * overhead so we cannot use an ASSERT 
> +     *
> +     *   ASSERT(rw_is_locked(&rgt->lock));
> +     */
>  
>      max_iter = min(*ref_count + (1 << GNTTABOP_CONTINUATION_ARG_SHIFT),
>                     nr_grant_entries(rgt));

... doesn't apply at all due to being white space damaged: the line
immediately preceding the ASSERT() which gets removed actually
has four trailing blanks in the source tree (which is wrong, but should
nevertheless be reflected in your patch). Given the other trailing
whitespace found above, I can also rule out the mail system having
eaten that white space on the way here, so I really wonder which
tree this patch was created against.

Considering the hassle with the first commit attempt yesterday,
may I please ask that you apply a little more care?

Thanks, Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
