
Re: [Xen-devel] [PATCH v7] x86/p2m: use large pages for MMIO mappings



>>> On 08.02.16 at 19:04, <george.dunlap@xxxxxxxxxx> wrote:
> First -- and this isn't a blocker, but just a question (and sorry if
> it's been answered before) -- why have the "0 means I did it all, <nr
> means I did it partially"?  Why not just return the number of entries
> actually handled?

Because I view zero as the conventional "success" indicator. No
deeper reason than that.
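
For illustration only, a hypothetical consumer of such a convention
(the do_memory_mapping() wrapper and its signature are made up here;
this is not actual libxc code) might look like:

static int map_all(domid_t domid, unsigned long first_gfn,
                   unsigned long first_mfn, unsigned long nr)
{
    while ( nr )
    {
        /* 0 == all done, 0 < ret < nr == partially done, ret < 0 == error. */
        int ret = do_memory_mapping(domid, first_gfn, first_mfn, nr);

        if ( ret < 0 )
            return ret;        /* hard error */
        if ( ret == 0 )
            break;             /* everything was mapped */
        first_gfn += ret;      /* ret entries handled; retry the remainder */
        first_mfn += ret;
        nr -= ret;
    }
    return 0;
}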

>> ---
>> Open issues (perhaps for subsequent changes):
>> - ARM side unimplemented (and hence libxc for now made to cope with both
>>   models), the main issue (besides my inability to test any change
>>   there) being the many internal uses of map_mmio_regions()
>> - iommu_{,un}map_page() interfaces don't support "order" (hence
>>   mmio_order() for now returns zero when !iommu_hap_pt_share, which in
>>   particular means the AMD side isn't being taken care of just yet, but
>>   note that this also has the intended effect of suppressing non-zero
>>   order mappings in the shadow mode case)
> 
> This information -- about using iommu_hap_pt_share() to guard against
> both AMD and shadow mode -- should probably be in the tree somewhere;
> preferably in a comment above map_mmio_regions(), but at very least in
> the changelog (rather than in this "comment to reviewers" section, which
> will be thrown away on check-in).

Can be done, of course.
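
To give an idea of the sort of comment being asked for, here is a
sketch (based only on the changelog text above, not the exact patch
code; hap_enabled(), need_iommu(), iommu_hap_pt_share, and the
PAGE_ORDER_* constants are the existing ones, everything else is an
assumption) of how mmio_order() could carry it:

/*
 * iommu_{,un}map_page() don't take an order yet, so only attempt large
 * pages when no separate IOMMU mapping is needed.  This also keeps the
 * AMD IOMMU and shadow mode cases on 4k mappings for the time being.
 */
static unsigned int mmio_order(const struct domain *d,
                               unsigned long start_fn, unsigned long nr)
{
    if ( !hap_enabled(d) || (need_iommu(d) && !iommu_hap_pt_share) )
        return PAGE_ORDER_4K;

    if ( !(start_fn & ((1UL << PAGE_ORDER_1G) - 1)) &&
         nr >= (1UL << PAGE_ORDER_1G) )
        return PAGE_ORDER_1G;

    if ( !(start_fn & ((1UL << PAGE_ORDER_2M) - 1)) &&
         nr >= (1UL << PAGE_ORDER_2M) )
        return PAGE_ORDER_2M;

    return PAGE_ORDER_4K;
}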

>> @@ -949,14 +965,21 @@ static int set_typed_p2m_entry(struct do
>>  static int set_foreign_p2m_entry(struct domain *d, unsigned long gfn,
>>                                   mfn_t mfn)
>>  {
>> -    return set_typed_p2m_entry(d, gfn, mfn, p2m_map_foreign,
>> +    return set_typed_p2m_entry(d, gfn, mfn, PAGE_ORDER_4K, p2m_map_foreign,
>>                                 p2m_get_hostp2m(d)->default_access);
>>  }
>>  
>>  int set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
>> -                       p2m_access_t access)
>> +                       unsigned int order, p2m_access_t access)
>>  {
>> -    return set_typed_p2m_entry(d, gfn, mfn, p2m_mmio_direct, access);
>> +    if ( order &&
>> +         rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
>> +                                 mfn_x(mfn) + (1UL << order) - 1) &&
>> +         !rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
>> +                                  mfn_x(mfn) + (1UL << order) - 1) )
>> +        return order;
> 
> What is this about?  Isn't set_mmio_p2m_entry() now supposed to have a
> calling convention like clear_mmio_p2m_entry(), where it returns
> recommended_order+1 if you should retry with order, and this value
> (recommended_order) is guaranteed to be strictly less than what was
> passed in?
> 
> Did you mean to return PAGE_ORDER_4K+1 here, perhaps?  (Or (order-9)+1?)

No. This caters for the callers of both {,ept_}p2m_type_to_flags() not
easily being in a position to request further page splitting. Hence,
rather than complicating the code there, we make sure here that the
r/o enforcement won't be missed.

>> --- a/xen/arch/x86/mm/p2m-ept.c
>> +++ b/xen/arch/x86/mm/p2m-ept.c
>> @@ -136,6 +136,7 @@ static void ept_p2m_type_to_flags(struct
>>              entry->r = entry->x = 1;
>>              entry->w = !rangeset_contains_singleton(mmio_ro_ranges,
>>                                                      entry->mfn);
>> +            ASSERT(entry->w || !is_epte_superpage(entry));
> 
> What is this about?  Single-page epte entries are allowed to be either
> read-only or read-write, but superpage entries have to have write
> permission?

Yes, due to the lack of an "order" input here, and (as mentioned
above) the difficulty this function's callers would otherwise have in
dealing with page splitting directives coming out of this function.

But now that you mention it, I think you're right about the code
further up: as it stands, it indeed seems to set things up for the
ASSERT() here to actually be triggerable (if there were a large enough
r/o MMIO region). So the code further up should indeed return
PAGE_ORDER_4K+1, as you suggest (which would then also address one of
Andrew's concerns).
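
For concreteness, a minimal sketch of the adjusted check in
set_mmio_p2m_entry() (an assumption about its eventual shape, not the
committed code):

if ( order > PAGE_ORDER_4K &&
     rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
                             mfn_x(mfn) + (1UL << order) - 1) &&
     !rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
                              mfn_x(mfn) + (1UL << order) - 1) )
    /* Partial overlap with a r/o range: have the caller retry with 4k. */
    return PAGE_ORDER_4K + 1;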

>> @@ -107,6 +108,8 @@ static unsigned long p2m_type_to_flags(p
>>      case p2m_mmio_direct:
>>          if ( !rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) )
>>              flags |= _PAGE_RW;
>> +        else
>> +            ASSERT(!level);
> 
> Why are we not allowed to have superpage MMIO ranges in this case?  Is
> it just because the AMD side is unimplemented and the shadow side it
> isn't allowed?
> 
> If so, a comment to that effect would be helpful.

Actually the primary reason is that the rangeset check is (and right
now can only be) a singleton one. I.e. this is again related to the
complications which would arise if a page split were to be requested
from here.
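
To illustrate why (this just mirrors the quoted hunk, with comments
added): the singleton lookup can only test a single MFN, so for a
superpage it would at best see the first page of the range, and a r/o
page elsewhere inside the superpage would go unnoticed.

case p2m_mmio_direct:
    if ( !rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) )
        flags |= _PAGE_RW;   /* only the first MFN of the range is checked */
    else
        ASSERT(!level);      /* r/o enforcement only valid for 4k entries */
    break;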

Jan
