[Xen-devel] [PATCH v2] xen/balloon: Fix mapping PG_offline pages to user space
The XEN balloon driver - in contrast to other balloon drivers - allows mapping some inflated pages to user space. Such pages are allocated via alloc_xenballooned_pages() and freed via free_xenballooned_pages(). The pfn space of these allocated pages is used by the hypervisor to map other things via hypercalls.

Pages marked with PG_offline must never be mapped to user space, as this page type uses the mapcount field of struct page. So clear/set PG_offline when allocating/freeing inflated pages. This way, most inflated pages can still be excluded by dumping tools, while the "reused for another purpose" balloon pages are correctly not marked as PG_offline.

Fixes: 77c4adf6a6df ("xen/balloon: mark inflated pages PG_offline")
Reported-by: Julien Grall <julien.grall@xxxxxxx>
Tested-by: Julien Grall <julien.grall@xxxxxxx>
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
v1 -> v2:
- Re-add the braces dropped by accident :)

 drivers/xen/balloon.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 39b229f9e256..d37dd5bb7a8f 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -604,6 +604,7 @@ int alloc_xenballooned_pages(int nr_pages, struct page **pages)
 	while (pgno < nr_pages) {
 		page = balloon_retrieve(true);
 		if (page) {
+			__ClearPageOffline(page);
 			pages[pgno++] = page;
 #ifdef CONFIG_XEN_HAVE_PVMMU
 			/*
@@ -645,8 +646,10 @@ void free_xenballooned_pages(int nr_pages, struct page **pages)
 	mutex_lock(&balloon_mutex);
 
 	for (i = 0; i < nr_pages; i++) {
-		if (pages[i])
+		if (pages[i]) {
+			__SetPageOffline(pages[i]);
 			balloon_append(pages[i]);
+		}
 	}
 
 	balloon_stats.target_unpopulated -= nr_pages;
-- 
2.17.2
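
For context (not part of the patch): a minimal sketch of how a driver might borrow ballooned pages and later return them, assuming the alloc_xenballooned_pages()/free_xenballooned_pages() API touched above. The NR_BORROWED count and the two wrapper functions are illustrative only. With this fix, the pages handed out no longer carry PG_offline, so a caller may map them to user space without clashing with the page type stored in the mapcount field.

#include <linux/mm.h>
#include <xen/balloon.h>

#define NR_BORROWED 4			/* illustrative count, not from the patch */

static struct page *borrowed[NR_BORROWED];

/* Pull pages out of the balloon; PG_offline is cleared on allocation. */
static int borrow_ballooned_pages(void)
{
	return alloc_xenballooned_pages(NR_BORROWED, borrowed);
}

/* Hand the pages back; the balloon marks them PG_offline again. */
static void return_ballooned_pages(void)
{
	free_xenballooned_pages(NR_BORROWED, borrowed);
}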