Re: [Xen-devel] [PATCH v6] x86/p2m: use large pages for MMIO mappings
On 02/02/16 13:24, Jan Beulich wrote:
>>>> On 01.02.16 at 16:00, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 01/02/16 09:14, Jan Beulich wrote:
>>> --- a/xen/arch/x86/mm/p2m.c
>>> +++ b/xen/arch/x86/mm/p2m.c
>>> @@ -899,48 +899,64 @@ void p2m_change_type_range(struct domain
>>> p2m_unlock(p2m);
>>> }
>>>
>>> -/* Returns: 0 for success, -errno for failure */
>>> +/*
>>> + * Returns:
>>> + * 0 for success
>>> + * -errno for failure
>>> + * 1 + new order for caller to retry with smaller order (guaranteed
>>> + * to be smaller than order passed in)
>>> + */
>>> static int set_typed_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
>>> - p2m_type_t gfn_p2mt, p2m_access_t access)
>>> + unsigned int order, p2m_type_t gfn_p2mt,
>>> + p2m_access_t access)
>>> {
>>> int rc = 0;
>>> p2m_access_t a;
>>> p2m_type_t ot;
>>> mfn_t omfn;
>>> + unsigned int cur_order = 0;
>>> struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>>
>>> if ( !paging_mode_translate(d) )
>>> return -EIO;
>>>
>>> - gfn_lock(p2m, gfn, 0);
>>> - omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, NULL, NULL);
>>> + gfn_lock(p2m, gfn, order);
>>> + omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, &cur_order, NULL);
>>> + if ( cur_order < order )
>>> + {
>>> + gfn_unlock(p2m, gfn, order);
>>> + return cur_order + 1;
>>> + }
>>> if ( p2m_is_grant(ot) || p2m_is_foreign(ot) )
>>> {
>>> - gfn_unlock(p2m, gfn, 0);
>>> + gfn_unlock(p2m, gfn, order);
>>> domain_crash(d);
>>> return -ENOENT;
>>> }
>>> else if ( p2m_is_ram(ot) )
>>> {
>>> - ASSERT(mfn_valid(omfn));
>>> - set_gpfn_from_mfn(mfn_x(omfn), INVALID_M2P_ENTRY);
>>> + unsigned long i;
>>> +
>>> + for ( i = 0; i < (1UL << order); ++i )
>>> + {
>>> + ASSERT(mfn_valid(_mfn(mfn_x(omfn) + i)));
>>> + set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
>> On further consideration, shouldn't we have a preemption check here?
>> Removing a 1GB superpage's worth of RAM mappings (2^18 = 262,144 4k
>> entries) is going to execute for an unreasonably long time.
> Maybe. We have 256k iteration loops elsewhere, so I'm not that
> concerned. The thing probably needing adjustment would then be
> map_mmio_regions(), to avoid multiplying the 256k here by the up
> to 64 iterations done there. Preempting here is not really
> possible, as we're holding the p2m lock.
Why is this problematic? All that needs to happen is to -ERESTART out
to a point where the p2m lock is dropped.
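Roughly the shape I have in mind -- a hypothetical sketch only, reusing the
identifiers from the hunk above, with the batching interval and the
preemption plumbing invented for illustration:

    for ( i = 0; i < (1UL << order); ++i )
    {
        ASSERT(mfn_valid(_mfn(mfn_x(omfn) + i)));
        set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
        /* Bail out periodically so the caller can drop the p2m lock. */
        if ( i && !(i & 0xfffUL) && hypercall_preempt_check() )
        {
            rc = -ERESTART;
            break;
        }
    }

The caller would then gfn_unlock(), set up a continuation, and re-enter for
the remainder of the range.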
>
> The only other alternative I see would be to disallow 1G mappings
> and only support 2M ones.
>
> Thoughts?
For now, restricting to 2M mappings at least limits the potential damage,
while still gaining some of the benefit of large MMIO mappings.
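Concretely, capping at 2M in map_mmio_regions() while consuming the new
"1 + new order" retry convention could look like this (a hypothetical
sketch against the interfaces this series touches, not actual patch code;
start_gfn, mfn, nr and max_order are assumed from the caller's context):

    for ( i = 0; !rc && i < nr; i += 1UL << order )
    {
        /* max_order derives from gfn/mfn alignment and remaining count. */
        for ( order = min(max_order, (unsigned int)PAGE_ORDER_2M); ;
              order = rc - 1 )
        {
            rc = set_mmio_p2m_entry(d, start_gfn + i, _mfn(mfn + i), order,
                                    p2m_get_hostp2m(d)->default_access);
            if ( rc <= 0 )    /* 0 == success, -errno == hard failure */
                break;
            /* rc == 1 + smaller order: retry with that order. */
        }
    }

The inner loop is guaranteed to terminate, since each retry order is
strictly smaller than the one passed in.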
~Andrew