
Re: [Xen-devel] [RFC 15/22] xen/arm: p2m: Re-implement relinquish_p2m_mapping using p2m_get_entry



On Thu, 28 Jul 2016, Julien Grall wrote:
> The current implementation of relinquish_p2m_mapping modifies the
> page table to erase the entries one by one. However, this is not
> necessary because the domain is not running anymore; skipping it will
> therefore speed up the domain destruction.

Could you please elaborate on this? Who is going to remove the p2m
entries if not this function?


> The function relinquish_p2m_mapping can be re-implemented using
> p2m_get_entry by iterating over the range mapped and using the mapping
> order given by the callee.
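
To make the shape of the new loop easier to follow before reading the
diff, here is a minimal sketch of the iteration pattern described above
(names taken from the patch below; the body is elided):

    gfn_t gfn = p2m->lowest_mapped_gfn;

    while ( gfn_x(gfn) < gfn_x(p2m->max_mapped_gfn) )
    {
        unsigned int order;
        p2m_type_t t;
        mfn_t mfn = p2m_get_entry(p2m, gfn, &t, NULL, &order);

        /* ... put the page reference for 4K RAM mappings,
         * skip holes and superpages ... */

        /* Advance by the order of the mapping just looked up. */
        gfn = gfn_add(gfn, 1UL << order);
    }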
> 
> Given that the preemption was chosen arbitrarily, it is no done on every
                                                          ^ now?

> 512 iterations. This means that Xen may check more often whether the
> function should be preempted when there are no mappings.
> 
> Finally, drop the operation RELINQUISH in apply_* as nobody is using it
> anymore. Note that the functions could have been dropped in one go at
> the end; however, I find it easier to drop the operations one by one,
> avoiding a big deletion in the patch that removes the last operation.
> 
> Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
> 
> ---
>     Further investigation needs to be done before applying this patch to
>     check if someone could take advantage of this change (such as
>     modifying an entry which was relinquished).
> ---
>  xen/arch/arm/p2m.c | 70 ++++++++++++++++++++++++++++++++++++++++--------------
>  1 file changed, 52 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index e7697bb..d0aba5b 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -721,7 +721,6 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
>  enum p2m_operation {
>      INSERT,
>      REMOVE,
> -    RELINQUISH,
>      MEMACCESS,
>  };
>  
> @@ -908,7 +907,6 @@ static int apply_one_level(struct domain *d,
>  
>          break;
>  
> -    case RELINQUISH:
>      case REMOVE:
>          if ( !p2m_valid(orig_pte) )
>          {
> @@ -1092,17 +1090,6 @@ static int apply_p2m_changes(struct domain *d,
>          {
>              switch ( op )
>              {
> -            case RELINQUISH:
> -                /*
> -                 * Arbitrarily, preempt every 512 operations or 8192 nops.
> -                 * 512*P2M_ONE_PROGRESS == 8192*P2M_ONE_PROGRESS_NOP == 0x2000
> -                 * This is set in preempt_count_limit.
> -                 *
> -                 */
> -                p2m->lowest_mapped_gfn = _gfn(addr >> PAGE_SHIFT);
> -                rc = -ERESTART;
> -                goto out;
> -
>              case MEMACCESS:
>              {
>                  /*
> @@ -1508,16 +1495,63 @@ int p2m_init(struct domain *d)
>      return rc;
>  }
>  
> +/*
> + * The function will go through the p2m and remove page references when
> + * required.
> + * The mappings are left intact in the p2m. This is fine because the
> + * domain will never run at that point.
> + *
> + * XXX: Check what this means for other parts (such as lookup)
> + */
>  int relinquish_p2m_mapping(struct domain *d)
>  {
>      struct p2m_domain *p2m = &d->arch.p2m;
> -    unsigned long nr;
> +    unsigned long count = 0;
> +    p2m_type_t t;
> +    int rc = 0;
> +    unsigned int order;
> +
> +    /* Convenience alias */
> +    gfn_t start = p2m->lowest_mapped_gfn;
> +    gfn_t end = p2m->max_mapped_gfn;
>  
> -    nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
> +    p2m_write_lock(p2m);
>  
> -    return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
> -                             INVALID_MFN, 0, p2m_invalid,
> -                             d->arch.p2m.default_access);
> +    for ( ; gfn_x(start) < gfn_x(end); start = gfn_add(start, 1UL << order) )
> +    {
> +        mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order);
> +
> +        count++;
> +        /*
> +         * Arbitrarily preempt every 512 iterations.
> +         */
> +        if ( !(count % 512) && hypercall_preempt_check() )
> +        {
> +            rc = -ERESTART;
> +            break;
> +        }
> +
> +        /* Skip holes and any superpage */
> +        if ( mfn_eq(mfn, INVALID_MFN) || order != 0 )
> +            /*
> +             * The order corresponds to the order of the mapping in the
> +             * page table. So we need to align the GFN before
> +             * incrementing.
> +             */
> +            start = _gfn(gfn_x(start) & ~((1UL << order) - 1));
> +        else
> +            p2m_put_l3_page(mfn, t);
> +    }
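
To make the alignment step above concrete: with a 4KB granularity, a
2MB superpage has order 9. A self-contained sketch of the same
arithmetic (the GFN value is illustrative only):

    #include <stdio.h>

    int main(void)
    {
        unsigned long gfn = 0x12345; /* landed inside a 2MB mapping */
        unsigned int order = 9;      /* order returned by the lookup */

        /* Align down to the base of the mapping... */
        unsigned long aligned = gfn & ~((1UL << order) - 1);

        /* ...so the loop increment of (1UL << order) lands on the
         * next mapping instead of somewhere inside the current one. */
        unsigned long next = aligned + (1UL << order);

        /* Prints aligned=0x12200 next=0x12400 */
        printf("aligned=%#lx next=%#lx\n", aligned, next);
        return 0;
    }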
> +
> +    /*
> +     * Update lowest_mapped_gfn so that on the next call we will start
> +     * where we stopped.
> +     */
> +    p2m->lowest_mapped_gfn = start;
> +
> +    p2m_write_unlock(p2m);
> +
> +    return rc;
>  }
>  
>  int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
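
For context, the -ERESTART return is meant to be driven by the usual
continuation mechanism. A minimal illustrative sketch of a caller
retrying until the walk completes (not the actual
domain_relinquish_resources code):

    int rc;

    do {
        rc = relinquish_p2m_mapping(d);
        /* In Xen proper, -ERESTART would trigger a hypercall
         * continuation rather than a tight retry loop. */
    } while ( rc == -ERESTART );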
