
Re: [Xen-devel] [RFC 15/22] xen/arm: p2m: Re-implement relinquish_p2m_mapping using p2m_get_entry

On Tue, 6 Sep 2016, Julien Grall wrote:
> Hi Stefano,
> On 05/09/16 22:58, Stefano Stabellini wrote:
> > On Thu, 28 Jul 2016, Julien Grall wrote:
> > > The current implementation of relinquish_p2m_mapping modifies the
> > > page table to erase the entries one by one. However, this is not
> > > necessary because the domain is no longer running, so skipping it
> > > will speed up domain destruction.
> > 
> > Could you please elaborate on this? Who is going to remove the p2m
> > entries if not this function?
> The current version of relinquish removes the reference on the page and
> then invalidates the entry (which may involve a cache flush).
> As the page tables are no longer used by the hardware, the latter action is
> not necessary. This is an optimization because flushing the cache can be
> expensive. However, as mentioned later in the commit message, we need to
> think about how the other helpers interact with the page table to avoid
> returning a wrong entry.

The idea is that nobody will remove the p2m entries until the whole p2m
is torn down (p2m_teardown)?
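To make the trade-off concrete, here is a minimal sketch of the two behaviours being discussed. Everything below (struct ent, cache_flush, the function names) is an illustrative stand-in, not the real Xen code: the point is only that the proposed path drops the page reference without rewriting the entry or flushing, leaving the stale entry for p2m_teardown to free wholesale.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model only -- not the real Xen p2m structures. */
struct ent {
    bool valid;    /* PTE valid bit */
    bool has_ref;  /* entry holds a reference on the backing page */
};

static unsigned long flushes;

/* Stand-in for the expensive dcache clean on Arm. */
static void cache_flush(void) { flushes++; }

/* Current behaviour: drop the reference AND invalidate + flush. */
static void relinquish_current(struct ent *e)
{
    if (e->has_ref)
        e->has_ref = false;  /* put_page() */
    e->valid = false;        /* rewrite the PTE */
    cache_flush();           /* clean the cache line for the PTE */
}

/* Proposed behaviour: only drop the reference.  The hardware no longer
 * walks these tables, so the stale entry is harmless until
 * p2m_teardown() frees all the table pages in one go. */
static void relinquish_proposed(struct ent *e)
{
    if (e->has_ref)
        e->has_ref = false;  /* put_page() only, no flush */
}
```

On a large guest this saves one cache-maintenance operation per mapping, which is where the destruction-time speedup in the commit message comes from.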

> I am thinking of deferring this optimization to the next release (i.e. Xen
> 4.9) to avoid rushing it.

If we are sure that there are no interactions with the p2m between the
domain_relinquish_resources and the p2m_teardown call, then this is
acceptable. Otherwise delaying this optimization is wiser.

> > > The function relinquish_p2m_mapping can be re-implemented using
> > > p2m_get_entry by iterating over the mapped range and using the
> > > mapping order given by the callee.
> > > 
> > > Given that the preemption was chosen arbitrarily, it is no done on every
> >                                                           ^ now?
> Yes, will fix it in the next version.
> Regards,
> -- 
> Julien Grall
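The iteration the patch describes can be sketched as follows. This is a simplified, self-contained model: fake_p2m, the per-GFN order array, the fixed preempt_every counter, and this p2m_get_entry signature are all assumptions for illustration, not the actual interfaces in xen/arch/arm/p2m.c. The loop drops one page reference per mapped entry, advances by the order the lookup reports, and checks for preemption on an iteration count rather than an arbitrary address boundary.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins only -- not the real Xen types or helpers. */
typedef unsigned long gfn_t;

enum p2m_type { p2m_invalid, p2m_ram_rw };

struct fake_p2m {
    enum p2m_type type[16];  /* mapping type at each GFN (toy-sized) */
    unsigned int order[16];  /* order of the mapping covering that GFN */
    unsigned long refs;      /* outstanding page references */
};

/* Stand-in for p2m_get_entry(): report the mapping at gfn and the
 * order of the superpage/page covering it. */
static enum p2m_type p2m_get_entry(struct fake_p2m *p2m, gfn_t gfn,
                                   unsigned int *order)
{
    *order = p2m->order[gfn];
    return p2m->type[gfn];
}

/* Walk the range with p2m_get_entry, dropping the reference on each
 * mapped page but never rewriting or flushing the entry (the domain is
 * dead, so the tables are not walked by hardware any more).  Preempt
 * every preempt_every iterations instead of at an arbitrary boundary. */
static int relinquish_p2m_mapping(struct fake_p2m *p2m, gfn_t start,
                                  gfn_t end, unsigned int preempt_every,
                                  unsigned int *iters)
{
    gfn_t gfn = start;

    while (gfn < end) {
        unsigned int order;
        enum p2m_type t = p2m_get_entry(p2m, gfn, &order);

        if (t != p2m_invalid)
            p2m->refs -= 1;      /* put_page() on the backing page */

        gfn += 1UL << order;     /* skip the whole mapping at once */

        if (++(*iters) % preempt_every == 0)
            return -1;           /* -ERESTART: caller re-invokes */
    }
    return 0;
}
```

Advancing by `1UL << order` is what makes the rewrite cheaper than the entry-by-entry walk: a 2MB block is retired in one iteration instead of 512.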

Xen-devel mailing list