
Re: [Xen-devel] [PATCH RFC v2] x86/p2m: use large pages for MMIO mappings



>>> On 23.09.15 at 18:22, <george.dunlap@xxxxxxxxxx> wrote:
> On 09/22/2015 01:56 PM, Jan Beulich wrote:
>> When mapping large BARs (e.g. the frame buffer of a graphics card) the
>> overhead of establishing such mappings using only 4k pages has,
>> particularly after the XSA-125 fix, become unacceptable. Alter the
>> XEN_DOMCTL_memory_mapping semantics once again, so that there's no
>> longer a fixed amount of guest frames that represents the upper limit
>> of what a single invocation can map. Instead bound execution time by
>> limiting the number of iterations (regardless of page size).
> 
> FWIW most of the code looks OK to me.
> 
> The final version should probably mention somewhere exactly what the
> changed semantics are -- specifically, that returning >0 means, "I
> mapped this many, keep mapping the rest".

Done, but this is subject to change anyway (because of ...

>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> RFC reasons:
>> - ARM side unimplemented (and hence libxc for now made to cope with both
>>   models), the main issue (besides my inability to test any change
>>   there) being the many internal uses of map_mmio_regions()
>> - error unmapping in map_mmio_regions() and error propagation to caller
>>   from unmap_mmio_regions() are not really satisfactory (for the latter
>>   a possible model might be to have the function - and hence the
>>   domctl - return the [non-zero] number of completed entries upon
>>   error, requiring the caller to re-invoke the hypercall to then obtain
>>   the actual error for the failed slot)

... this open question).
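
To make the changed contract concrete, here is a minimal caller-side
sketch (illustration only, assuming the libxc do_domctl() wrapper and the
xen_domctl_memory_mapping fields shown in the hunk below; the helper name
is made up):

---
static int map_in_chunks(xc_interface *xch, struct xen_domctl *domctl,
                         uint64_t first_gfn, uint64_t first_mfn,
                         uint64_t nr)
{
    uint64_t done = 0;

    while ( done < nr )
    {
        int rc;

        domctl->u.memory_mapping.first_gfn = first_gfn + done;
        domctl->u.memory_mapping.first_mfn = first_mfn + done;
        domctl->u.memory_mapping.nr_mfns = nr - done;

        rc = do_domctl(xch, domctl);
        if ( rc > 0 )
            done += rc;     /* partial progress: keep mapping the rest */
        else if ( rc == 0 )
            break;          /* everything requested is now handled */
        else
            return rc;      /* error: errno holds the reason, e.g. E2BIG */
    }

    return 0;
}
---

Under the alternative model floated in the RFC reasons, a slot failing
mid-batch would also surface as a positive return, with the actual error
only reported when the caller re-invokes for the failed slot - which is
why rc > 0 must not be taken as success for the whole range.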

>> @@ -2240,19 +2240,24 @@ int xc_domain_memory_mapping(
>>          domctl.u.memory_mapping.nr_mfns = nr;
>>          domctl.u.memory_mapping.first_gfn = first_gfn + done;
>>          domctl.u.memory_mapping.first_mfn = first_mfn + done;
>> -        err = do_domctl(xch, &domctl);
>> -        if ( err && errno == E2BIG )
>> +        rc = do_domctl(xch, &domctl);
>> +        if ( rc < 0 && errno == E2BIG )
>>          {
>>              if ( max_batch_sz <= 1 )
>>                  break;
>>              max_batch_sz >>= 1;
>>              continue;
>>          }
>> +        if ( rc > 0 )
>> +        {
>> +            done += rc;
>> +            continue;
>> +        }
>>          /* Save the first error... */
>>          if ( !ret )
>> -            ret = err;
>> +            ret = rc;
>>          /* .. and ignore the rest of them when removing. */
>> -        if ( err && add_mapping != DPCI_REMOVE_MAPPING )
>> +        if ( rc && add_mapping != DPCI_REMOVE_MAPPING )
>>              break;
> 
> Would it make more sense to structure this something like this:
> ---
> rc = do_domctl()
> 
> if( rc < 0 ) {
>   if ( errno == E2BIG ) { blah; continue; }
>   /* Other error stuff */
> } else if ( rc > 0 )
>    nr = rc;
> 
> done += nr;
> ---

Not sure - I aimed at minimal changes to the existing code (but I
may well have missed that goal). I guess I'll wait for a tools
maintainer's opinion...
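
For comparison, here is that suggestion fleshed out into the shape of the
quoted libxc hunk (a sketch only, reusing the existing variable names;
whether it really reads better is for the tools maintainers to judge):

---
        rc = do_domctl(xch, &domctl);

        if ( rc < 0 )
        {
            if ( errno == E2BIG )
            {
                /* Batch too large for the hypervisor: halve and retry. */
                if ( max_batch_sz <= 1 )
                    break;
                max_batch_sz >>= 1;
                continue;
            }
            /* Save the first error... */
            if ( !ret )
                ret = rc;
            /* ... and ignore the rest of them when removing. */
            if ( add_mapping != DPCI_REMOVE_MAPPING )
                break;
        }
        else if ( rc > 0 )
            nr = rc;        /* only this many entries were completed */

        done += nr;
---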

>> @@ -2045,22 +2091,47 @@ int map_mmio_regions(struct domain *d,
>>  {
>>      int ret = 0;
>>      unsigned long i;
>> +    unsigned int iter, order;
>>  
>>      if ( !paging_mode_translate(d) )
>>          return 0;
>>  
>> -    for ( i = 0; !ret && i < nr; i++ )
>> +    for ( iter = i = 0; i < nr && iter < MAP_MMIO_MAX_ITER;
>> +          i += 1UL << order, ++iter )
>>      {
>> -        ret = set_mmio_p2m_entry(d, start_gfn + i, _mfn(mfn + i),
>> -                                 p2m_get_hostp2m(d)->default_access);
>> -        if ( ret )
>> +        for ( order = mmio_order(d, (start_gfn + i) | (mfn + i), nr - i); ;
>> +              order = ret - 1 )
> 
> I think you need a comment explaining what this (start_gfn + i) | (mfn
> + i) stuff is about; maybe something like:
> 
> /* OR'ing the gfn and mfn values will return an order suitable to both */
> 
> (I assume that's what's going on here?)

Yes. Comments added (once per function using mmio_order()).
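
To spell out the idea behind the OR with a (deliberately simplified,
hypothetical) helper - the gfn and the mfn each limit the usable order
through their alignment, and OR'ing them produces a value whose lowest
set bit reflects the stricter of the two, so a single check covers both;
the real mmio_order() naturally applies further restrictions (e.g. which
orders the p2m can actually use):

---
static unsigned int example_order(unsigned long gfn_or_mfn,
                                  unsigned long nr_left)
{
    unsigned int order = 0;

    /* Largest power-of-two chunk still fitting in what is left to map. */
    while ( nr_left >> (order + 1) )
        ++order;

    /* Clamp to the common alignment of gfn and mfn. */
    if ( gfn_or_mfn )
    {
        unsigned int align = 0;

        while ( !(gfn_or_mfn & (1UL << align)) )
            ++align;
        if ( align < order )
            order = align;
    }

    return order;
}
---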

> No comments otherwise.

What a pity - I had hoped for some response on at least the middle
of the three RFC reasons.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

