Re: [PATCH 13/16] xen/page_alloc: add a path for xenheap when there is no direct map
On 30.04.2020 22:44, Hongyan Xia wrote:
> From: Hongyan Xia <hongyxia@xxxxxxxxxx>
>
> When there is not an always-mapped direct map, xenheap allocations
> need to be mapped and unmapped on-demand.
>
> Signed-off-by: Hongyan Xia <hongyxia@xxxxxxxxxx>

This series has been left uncommented for far too long - I'm sorry.
While the earlier patches here are probably reasonable (but would
likely need re-basing, so I'm not sure it makes much sense to try to
look through them before that has happened), I'd like to spell out
that I'm not really happy with the approach taken here: Simply
re-introducing a direct map entry for individual pages is not the way
to go imo. First and foremost this is rather wasteful in terms of
resources (VA space).

As I don't think we have many cases where code actually depends on
being able to apply __va() (or an equivalent) to the address returned
from alloc_xenheap_pages(), I think this should instead involve
vmap(), with the vmap area drastically increased (perhaps taking all
of the space the direct map presently consumes). Any remaining users
of __va() or the like could then perhaps be converted into an alias /
derivation of vmap_to_{mfn,page}().

Since the goal of eliminating the direct map is to have unrelated
guests' memory not mapped while running a certain guest, it could
then further be considered to "overmap" what is being requested:
rather than just mapping the single 4k page, the containing 2M or 1G
one could be mapped (provided it all belongs to the running guest),
while unmapping could be deferred until the next context switch to a
different domain (or, if necessary, for 64-bit PV guests until the
next switch to guest user mode). Of course a prerequisite for this is
a sufficiently low-overhead means of establishing whether the larger
page containing a smaller one is entirely owned by the same domain.

Jan
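[Editorial illustration] To make the vmap()-based direction suggested above more concrete, here is a minimal sketch of what an alloc_xenheap_pages()/free_xenheap_pages() pair might look like without a direct map. This is not code from the patch series or from the Xen tree: it assumes the in-tree __vmap()/vunmap() and vmap_to_page() helpers, uses an anonymous domheap allocation, and omits the existing xenheap accounting, scrubbing and locking details.

```c
/*
 * Illustrative sketch only (not part of the patch under review):
 * a vmap()-backed xenheap path, assuming the vmap area has been
 * enlarged as suggested in the reply above.
 */
#include <xen/mm.h>
#include <xen/vmap.h>

void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
{
    struct page_info *pg = alloc_domheap_pages(NULL, order, memflags);
    mfn_t mfn;
    void *va;

    if ( !pg )
        return NULL;

    mfn = page_to_mfn(pg);

    /*
     * Map the 2^order physically contiguous pages into the (enlarged)
     * vmap area rather than handing back a direct-map address.
     */
    va = __vmap(&mfn, 1u << order, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
    if ( !va )
        free_domheap_pages(pg, order);

    return va;
}

void free_xenheap_pages(void *v, unsigned int order)
{
    struct page_info *pg;

    if ( !v )
        return;

    /* Translate back to the page before tearing down the mapping. */
    pg = vmap_to_page(v);
    vunmap(v);
    free_domheap_pages(pg, order);
}
```

Whether the unmap should happen immediately in free_xenheap_pages(), as shown, or be deferred until the next context switch to a different domain is exactly the design question the "overmap" idea raises. The hypothetical helper below (again an editorial sketch, assuming x86's PAGETABLE_ORDER and the usual mfn/page accessors) only spells out the ownership check that overmapping a 2M superpage would require; a linear scan like this is clearly not the "sufficiently low-overhead means" asked for.

```c
/*
 * Hypothetical helper: is the 2M superpage containing @mfn wholly
 * owned by domain @d?  Illustrative only; a real implementation would
 * need a cheaper way to establish this than scanning every page.
 */
static bool superpage_wholly_owned_by(mfn_t mfn, const struct domain *d)
{
    unsigned long base = mfn_x(mfn) & ~((1UL << PAGETABLE_ORDER) - 1);
    unsigned long i;

    for ( i = 0; i < (1UL << PAGETABLE_ORDER); i++ )
    {
        mfn_t cur = _mfn(base + i);

        if ( !mfn_valid(cur) ||
             page_get_owner(mfn_to_page(cur)) != d )
            return false;
    }

    return true;
}
```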