Re: [Xen-devel] [PATCH v6 08/10] xen/arm: Add relinquish_p2m_mapping to remove reference on every mapped page

On 12/18/2013 01:21 PM, Ian Campbell wrote:

(the cc line here is a bit odd, why Ian J? But not Tim or Stefano? I've
added those two)

On Tue, 2013-12-17 at 17:02 +0000, Ian Campbell wrote:
For the last item, I think it's a bit stupid to create a table if we are
removing/relinquishing a mapping. But I think that's an improvement for
later. There are lots of improvements to make in this function (eg:
flushing).

Agreed.

Actually, there is an efficiency concern here.

If we were to skip non-present first and second levels then we would
skip vast swathes of the address space very quickly. As it stands we
actively spend time filling those levels in just so we can recurse over
them.

This effectively turns this back into a loop over the entire gfn space,
which is what we wanted to avoid.
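
To put rough numbers on that (an illustrative aside, not code from the
series): with the 4KB-granule LPAE format the p2m uses, each table level
covers the spans below, so one skipped non-present first-level entry
stands in for roughly 256k per-page iterations:

   /* Illustrative LPAE entry coverage (4KB granule, 3-level p2m);
    * these constants are for exposition only. */
   #define THIRD_SIZE   (1UL << 12)  /* third level:  4KB, 1 page         */
   #define SECOND_SIZE  (1UL << 21)  /* second level: 2MB, 512 pages      */
   #define FIRST_SIZE   (1UL << 30)  /* first level:  1GB, 262144 pages   */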

The use of next_gfn_to_relinquish..max_mapped_gfn mitigates this
somewhat, but max_mapped_gfn is guest-controlled and can trivially be
made huge.
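
(For context, the bounded walk in question would be roughly the shape
below; the argument list is guessed from the series' naming, not quoted
from the patch:)

   /* Relinquish only the range the guest ever had mapped -- but the
    * upper bound, max_mapped_gfn, is under guest control. */
   apply_p2m_changes(d, RELINQUISH,
                     pfn_to_paddr(p2m->next_gfn_to_relinquish),
                     pfn_to_paddr(p2m->max_mapped_gfn),
                     /* maddr: unused for removal */ 0,
                     MATTR_MEM, p2m_invalid);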

How hard would it be to skip missing entries for the remove and
relinquish walks? Should just be roughly:

   int populate = (op == INSERT || op == ALLOCATE);

   for ( addr = ... ; addr < ... ; addr ... )
   {
     [...]

     if ( !first[...].valid )
     {
        if ( !populate )
        {
           addr += /* size of a first-level supermapping */;
           continue;
        }
        rc = p2m_create(...);
        [..error..]
     }

     [same for second etc]
   }

"size of a first" needs care vs the += PAGE_SIZE in the for loop, I'd be
inclined to turn this into a while loop and move the += PAGE_SIZE to the
end.
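
A fuller sketch of that while-loop shape (hedged: first_table_offset(),
FIRST_SIZE/FIRST_MASK and p2m_create_table() are written as I recall
them from xen/arch/arm and would need checking against the tree; d, op,
first, rc, start_gpaddr and end_gpaddr come from the surrounding
function):

   int populate = (op == INSERT || op == ALLOCATE);
   paddr_t addr = start_gpaddr;

   while ( addr < end_gpaddr )
   {
       if ( !first[first_table_offset(addr)].p2m.valid )
       {
           if ( !populate )
           {
               /* Round up to the next first-level boundary; in a while
                * loop, continue re-tests the condition without running
                * the += PAGE_SIZE below.
                * (FIRST_MASK == ~(FIRST_SIZE - 1).) */
               addr = (addr + FIRST_SIZE) & FIRST_MASK;
               continue;
           }
           rc = p2m_create_table(d, &first[first_table_offset(addr)]);
           if ( rc < 0 )
               goto out;
       }

       /* [same test at the second level, skipping SECOND_SIZE] */

       /* ... per-page handling for the current op ... */

       addr += PAGE_SIZE;
   }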

Your solution should be fine. I will add a patch to the series.

--
Julien Grall
