
Re: [Xen-devel] Reducing or removing direct map from xen (was Re: Ongoing/future speculative mitigation work)



On Wed, Feb 20, 2019 at 02:00:52PM +0100, Roger Pau Monné wrote:
> On Wed, Feb 20, 2019 at 12:29:01PM +0000, Wei Liu wrote:
> > On Thu, Jan 24, 2019 at 11:44:55AM +0000, Wei Liu wrote:
> > > 3. Implement xenheap using vmap infrastructure
> > > 
> > > This helps preserve xenheap's "always mapped" property. At the moment,
> > > vmap relies on xenheap; we want to turn this relationship around.
> > > 
> > > There is a loop that needs breaking in the new world:
> > > 
> > >   alloc_xenheap_pages -> vmap -> __vmap -> map_pages_to_xen ->
> > >     virt_to_xen_l1e -> alloc_xen_pagetable -> alloc_xenheap_page -> vmap 
> > > ...
> > > 
> > > Two options were proposed to break this loop:
> > > 
> > >   3.1 Pre-populate all page tables for vmap region
> > 
> > Now that we have this ...
> > 
> > >   3.2 Switch page table allocation to use domheap page
> > > 
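For 3.2, the rough shape would be something like the below (illustrative
sketch only, not actual patches): page tables come from the domheap and are
only mapped transiently while being edited, so alloc_xen_pagetable no longer
needs alloc_xenheap_page/vmap at all.

    /* Sketch: break the loop by taking page tables from the domheap
     * (error handling omitted). */
    struct page_info *pg = alloc_domheap_page(NULL, 0);
    l1_pgentry_t *pl1e = map_domain_page(page_to_mfn(pg));  /* transient */

    clear_page(pl1e);
    /* ... install / edit entries ... */
    unmap_domain_page(pl1e);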
> > > 
> > > The other work item is to track page<->virt relationship so that
> > > conversion functions (_to_virt etc) continue to work. For PoC purposes,
> > > putting a void * into page_info is good enough. But in the future we
> > > want to have a separate array for tracking so that page_info stays a
> > > power of two in size.
> > > 
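For the PoC form of this, the tracking could be as crude as the below (the
field name is made up; the eventual separate array is what keeps struct
page_info at a power-of-two size):

    struct page_info
    {
        /* ... existing fields ... */
        void *virt;   /* VA this page is mapped at in the vmap area, or NULL */
    };

    /* PoC only: the _to_virt conversions read the stored pointer instead
     * of computing an offset into the direct map. */
    #define page_to_virt(pg)  ((pg)->virt)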
> > 
> > I started working on some prototyping code for the rest of this major
> > work item. Conversion functions are a bit messy to deal with (I have no
> > idea whether my modifications are totally correct at this point), but
> > the biggest issue I see is an optimisation done by xmalloc which
> > isn't compatible with vmap.
> > 
> > So xmalloc has this optimisation: it will allocate a high-order page
> > from xenheap when necessary and then attempt to break that up and return
> > the unused portion.  Vmap uses a bitmap to track address space usage, and
> > it mandates a guard page before every address space allocation. What
> > xmalloc does amounts to freeing part of an existing address space
> > allocation, which isn't really supported by vmap.
> > 
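To make the clash concrete, the optimisation boils down to roughly the
following (simplified; from memory the real code hands the tail back in
power-of-two chunks, but the effect is the same):

    unsigned int order = get_order_from_bytes(size), i;
    void *p = alloc_xenheap_pages(order, 0);      /* maps 1 << order pages */

    if ( p )
        for ( i = PFN_UP(size); i < (1u << order); i++ )
            free_xenheap_pages(p + (i << PAGE_SHIFT), 0);  /* return the tail */

With a vmap-backed xenheap, p would be a single vmap allocation (one bitmap
entry plus a guard page) covering 1 << order pages, and freeing the tail
would mean shrinking that allocation in place, which the vmap bitmap has no
way to express.
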
> > I came up with two options yesterday:
> > 
> > 1. Remove the optimisation in xmalloc
> > 2. Make vmap able to break up an allocation
> > 
> > Neither looks great to me. The first is simple but potentially wasteful
> > (how much is wasted?). The second requires non-trivial modification to
> > vmap, essentially removing the mandatory guard page. In comparison the
> > first is easier and safer.
> > 
> > I would like to hear people's thought on this. Comments are welcome.
> 
> The PV dom0 builder does something similar to this: it tries to
> allocate a page of an order equal to or higher than the order of
> the requested size, and then frees up the unused part.
> 
> I've used another approach for the PVH dom0 builder, which is to never
> allocate more than what's required, and instead always under-allocate.
> This has the benefit of not splitting high order pages, but requires
> multiple calls to the allocation function. See
> pvh_populate_memory_range in hvm/dom0_build.c and its usage of
> get_order_from_pages. I think a similar approach could be implemented
> in xmalloc?
> 

The usage in the PV dom0 builder is not an issue because those pages are
domheap pages. On a related topic, I have to fix that instance since it
treats domheap pages like xenheap pages, which will be very wrong in the
future.

Your example of the PVH dom0 builder uses domheap pages too, so that's not an
issue.

I think under-allocate-then-map looks plausible. xmalloc will need
to allocate pages, put them into an array and call __vmap on that array
directly.
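Something along these lines (very rough sketch: the function name and array
sizing are made up, vmap() stands in for the eventual __vmap() call with
explicit flags, and the mfn array itself obviously can't come from xmalloc):

    static void *xmalloc_whole_pages_vmapped(unsigned int nr_pages)
    {
        mfn_t mfn[XMALLOC_MAX_PAGES];   /* placeholder sizing */
        struct page_info *pg;
        unsigned int i;
        void *va;

        for ( i = 0; i < nr_pages; i++ )
        {
            pg = alloc_domheap_page(NULL, 0);
            if ( !pg )
                goto fail;
            mfn[i] = page_to_mfn(pg);
        }

        /* One mapping, one guard page, exactly nr_pages of backing memory. */
        va = vmap(mfn, nr_pages);
        if ( va )
            return va;

     fail:
        while ( i-- )
            free_domheap_page(mfn_to_page(mfn[i]));
        return NULL;
    }

xfree() of such an allocation would then be vunmap() plus freeing the
individual pages behind it.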

Wei.

> Roger.
