
Re: [Xen-devel] [PATCH v6 4/4] x86/ioreq server: Reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.



>>> On 02.09.16 at 12:47, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
> @@ -5551,7 +5553,35 @@ static int hvmop_map_mem_type_to_ioreq_server(
>      if ( rc != 0 )
>          goto out;
>  
> -    rc = hvm_map_mem_type_to_ioreq_server(d, op.id, op.type, op.flags);
> +    if ( gfn == 0 )
> +        rc = hvm_map_mem_type_to_ioreq_server(d, op.id, op.type, op.flags);
> +
> +    /*
> +     * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
> +     * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
> +     */
> +    if ( op.flags == 0 && rc == 0 )
> +    {
> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +        while ( read_atomic(&p2m->ioreq.entry_count) &&
> +                gfn <= p2m->max_mapped_pfn )
> +        {
> +            unsigned long gfn_end = gfn + HVMOP_op_mask;
> +
> +            p2m_finish_type_change(d, gfn, gfn_end,
> +                                   p2m_ioreq_server, p2m_ram_rw);
> +
> +            /* Check for continuation if it's not the last iteration. */
> +            if ( ++gfn_end <= p2m->max_mapped_pfn &&
> +                hypercall_preempt_check() )
> +            {
> +                rc = -ERESTART;
> +                *iter = gfn_end;
> +                goto out;
> +            }
> +        }

"gfn" doesn't get updated, so if no preemption happens here, the
same GFN range will get acted on a second time (and so on, until
eventually preemption becomes necessary).

Also, please avoid "goto" when you don't really need it - here you
could easily use just "break" instead.
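
Just to illustrate both points (the missing advance of "gfn" and the
"goto"), something along these lines is what I'd expect - an untested
sketch only, re-using the names from your hunk:

    while ( read_atomic(&p2m->ioreq.entry_count) &&
            gfn <= p2m->max_mapped_pfn )
    {
        unsigned long gfn_end = gfn + HVMOP_op_mask;

        p2m_finish_type_change(d, gfn, gfn_end,
                               p2m_ioreq_server, p2m_ram_rw);

        /* Advance to the next chunk so a re-entry makes progress. */
        gfn = gfn_end + 1;

        /* Check for continuation if it's not the last iteration. */
        if ( gfn <= p2m->max_mapped_pfn &&
             hypercall_preempt_check() )
        {
            rc = -ERESTART;
            *iter = gfn;
            break;
        }
    }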

> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -545,6 +545,9 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
>                      e.ipat = ipat;
>                      if ( e.recalc && p2m_is_changeable(e.sa_p2mt) )
>                      {
> +                         if ( e.sa_p2mt == p2m_ioreq_server )
> +                             p2m->ioreq.entry_count--;

I seem to recall that George had asked you to put this accounting
elsewhere.

And then I'd really like you to add assertions making sure this
count doesn't underflow.
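
I.e. something like (a sketch only):

                         if ( e.sa_p2mt == p2m_ioreq_server )
                         {
                             /* Unmapping must not drop more entries than were counted. */
                             ASSERT(p2m->ioreq.entry_count > 0);
                             p2m->ioreq.entry_count--;
                         }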

> @@ -965,7 +968,8 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
>      if ( is_epte_valid(ept_entry) )
>      {
>          if ( (recalc || ept_entry->recalc) &&
> -             p2m_is_changeable(ept_entry->sa_p2mt) )
> +             p2m_is_changeable(ept_entry->sa_p2mt) &&
> +             (ept_entry->sa_p2mt != p2m_ioreq_server) )
>              *t = p2m_is_logdirty_range(p2m, gfn, gfn) ? p2m_ram_logdirty
>                                                        : p2m_ram_rw;
>          else

Considering this (and at least one more similar adjustment further
down), is it really appropriate to include p2m_ioreq_server in the
set of "changeable" types?
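
If it has to stay in that set, then at least a small helper would keep
the special casing in one place - purely illustrative, the name is made
up:

static inline bool_t p2m_check_changeable(p2m_type_t t)
{
    /* "Changeable" for recalc purposes, but excluding ioreq_server. */
    return p2m_is_changeable(t) && t != p2m_ioreq_server;
}

which the sites needing the extra check (like the one above) could then
use instead of open-coding it.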

> +void p2m_finish_type_change(struct domain *d,
> +                           unsigned long start, unsigned long end,
> +                           p2m_type_t ot, p2m_type_t nt)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    p2m_type_t t;
> +    unsigned long gfn = start;
> +
> +    ASSERT(start <= end);
> +    ASSERT(ot != nt);
> +    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
> +
> +    p2m_lock(p2m);
> +
> +    end = (end > p2m->max_mapped_pfn) ? p2m->max_mapped_pfn : end;

min() or an if() without else.
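
I.e. simply

    end = min(end, p2m->max_mapped_pfn);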

> +    while ( gfn <= end )
> +    {
> +        get_gfn_query_unlocked(d, gfn, &t);
> +
> +        if ( t == ot )
> +            p2m_change_type_one(d, gfn, t, nt);
> +
> +        gfn++;
> +    }
> +
> +    p2m_unlock(p2m);
> +}

And then I wonder why p2m_change_type_range() can't be used
instead of adding this new function.
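Purely for illustration, and assuming its interface (with an exclusive
end, if I'm not mistaken) fits the inclusive range used here, that
would boil down to a single call like

    p2m_change_type_range(d, start, end + 1,
                          p2m_ioreq_server, p2m_ram_rw);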

Jan

