
Re: [Xen-devel] [PATCH v10 6/6] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.



>>> On 05.04.17 at 11:11, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:

> 
> On 4/3/2017 11:23 PM, Jan Beulich wrote:
>>>>> On 02.04.17 at 14:24, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>> After an ioreq server has been unmapped, the remaining p2m_ioreq_server
>>> entries need to be reset back to p2m_ram_rw. This patch does so
>>> synchronously by iterating over the p2m table.
>>>
>>> The synchronous reset is necessary because we need to guarantee that
>>> the p2m table is clean before another ioreq server is mapped. And
>>> since sweeping the p2m table could be time consuming, it is done
>>> with hypercall continuation.
>>>
>>> Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
>> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
>> albeit I think ...
>>
>>> --- a/xen/arch/x86/mm/p2m.c
>>> +++ b/xen/arch/x86/mm/p2m.c
>>> @@ -1031,6 +1031,35 @@ void p2m_change_type_range(struct domain *d,
>>>       p2m_unlock(p2m);
>>>   }
>>>   
>>> +/* Synchronously modify the p2m type for a range of gfns from ot to nt. */
>>> +void p2m_finish_type_change(struct domain *d,
>>> +                            gfn_t first_gfn, unsigned long max_nr,
>>> +                            p2m_type_t ot, p2m_type_t nt)
>>> +{
>>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> +    p2m_type_t t;
>>> +    unsigned long gfn = gfn_x(first_gfn);
>>> +    unsigned long last_gfn = gfn + max_nr - 1;
>>> +
>>> +    ASSERT(ot != nt);
>>> +    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
>>> +
>>> +    p2m_lock(p2m);
>>> +
>>> +    last_gfn = min(last_gfn, p2m->max_mapped_pfn);
>>> +    while ( gfn <= last_gfn )
>>> +    {
>>> +        get_gfn_query_unlocked(d, gfn, &t);
>>> +
>>> +        if ( t == ot )
>>> +            p2m_change_type_one(d, gfn, t, nt);
>>> +
>>> +        gfn++;
>>> +    }
>>> +
>>> +    p2m_unlock(p2m);
>>> +}
>> ... this could be improved to reduce overall latency: a fixed
>> limit as input is not very useful, as it matters quite a bit whether
>> we need to call p2m_change_type_one(). Doing 256 iterations with not a
>> single call is going to be pretty fast, I think, so it would be desirable
>> to weigh iterations with and without a call to that function differently.
> 
> Oh, thanks for your advice, very enlightening.
> With the code freeze date approaching, how about we solve this in a
> separate patch? :)

Well, that's what I tried to express by giving my R-b, with the
rest simply being a remark.
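[For readers following along: the remark above suggests charging cheap and
expensive iterations against a shared budget rather than using a fixed
iteration count. The sketch below illustrates that idea in isolation; the
BUDGET/CHEAP_COST/EXPENSIVE_COST values and the stubbed change_type_one()
are hypothetical stand-ins, not Xen code or its actual preemption logic.]

```c
/* Weighted-budget sweep sketch. A gfn whose type already differs from
 * `ot` costs little; one needing an actual type change costs much more.
 * The caller would resume from the returned gfn via hypercall
 * continuation. All constants and stubs here are illustrative only. */
#include <assert.h>
#include <stdbool.h>

#define BUDGET         4096  /* total work allowed before continuing */
#define CHEAP_COST        1  /* gfn that needs no type change */
#define EXPENSIVE_COST  256  /* gfn that needs p2m_change_type_one() */

/* Stub standing in for p2m_change_type_one(); counts invocations. */
static unsigned int changes;
static void change_type_one(unsigned long gfn)
{
    (void)gfn;
    changes++;
}

/* Sweep [gfn, last_gfn]; return the first unprocessed gfn, or
 * last_gfn + 1 when the whole range fit within the budget. */
static unsigned long sweep(unsigned long gfn, unsigned long last_gfn,
                           const bool *needs_change)
{
    unsigned long budget = BUDGET;

    while ( gfn <= last_gfn )
    {
        unsigned long cost = needs_change[gfn] ? EXPENSIVE_COST
                                               : CHEAP_COST;

        if ( cost > budget )
            break;              /* continuation point */
        budget -= cost;

        if ( needs_change[gfn] )
            change_type_one(gfn);

        gfn++;
    }

    return gfn;
}
```

With these example weights, a sweep over entries that all need changing
stops after BUDGET / EXPENSIVE_COST entries, while a sweep over entries
that are already clean covers many more per continuation, which is the
latency reduction the remark is after.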

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
