[Xen-devel] [v2 0/1] Allow deferred page initialization for xen pv domains
Changelog v1 - v2:
	- Addressed comment from Juergen Gross: fixed a comment, and moved
	  after_bootmem from the PV framework to x86_init.hyper.

From this discussion:
https://www.spinics.net/lists/linux-mm/msg145604.html

I investigated whether it is feasible to re-enable deferred page
initialization on Xen's paravirtualized domains. After studying the code,
I found a non-intrusive way to do just that. All we need to do is to
assume that page-table pages are pinned early in boot, which is always
true, and add a new x86_init.hyper op call to notify guests that the boot
allocator is finished, so we can set all the necessary fields in the
already initialized struct pages (a small standalone sketch of the hook
shape follows at the end of this message).

I have tested this on my laptop with a 64-bit kernel, but I would
appreciate it if someone could provide more Xen testing.

Apply against: linux-next.

Enable the following configs:

CONFIG_XEN_PV=y
CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
The above two are needed to test deferred page initialization on PV Xen
domains. If the fix is applied correctly, dmesg should output line(s)
like this during boot:
[    0.266180] node 0 initialised, 717570 pages in 36ms

CONFIG_DEBUG_VM=y
This is needed to poison struct page memory; otherwise it would be all
zeros.

CONFIG_DEBUG_VM_PGFLAGS=y
Verifies that we do not access struct page flags while the memory is
still poisoned (struct pages are not initialized yet).

Pavel Tatashin (1):
  xen, mm: Allow deferred page initialization for xen pv domains

 arch/x86/include/asm/x86_init.h |  2 ++
 arch/x86/kernel/x86_init.c      |  1 +
 arch/x86/mm/init_32.c           |  1 +
 arch/x86/mm/init_64.c           |  1 +
 arch/x86/xen/mmu_pv.c           | 38 ++++++++++++++++++++++++++------------
 mm/page_alloc.c                 |  4 ----
 6 files changed, 31 insertions(+), 16 deletions(-)

-- 
2.16.2
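[Editorial sketch, not part of the patch: the following standalone C program
illustrates the general shape of an x86_init.hyper-style op as described in
the cover letter, i.e. a default no-op hook that a hypervisor backend
overrides and that the generic init path calls once the boot allocator is
torn down. All names here (hyper_ops, xen_pv_init_after_bootmem, the
pinned_pgtable_pages array) are illustrative assumptions, not the actual
kernel symbols or the actual fixup logic.]

/* Standalone illustration of a "notify the guest after bootmem" hook. */
#include <stdio.h>

struct page {
	unsigned long flags;	/* stand-in for struct page state */
};

/* Op table: a no-op default, overridable by hypervisor setup code. */
struct hyper_init_ops {
	void (*init_after_bootmem)(void);
};

static void default_init_after_bootmem(void)
{
	/* bare metal / other guests: nothing to do */
}

static struct hyper_init_ops hyper_ops = {
	.init_after_bootmem = default_init_after_bootmem,
};

/* Pretend pool of page-table pages that the PV guest pinned early in boot. */
static struct page pinned_pgtable_pages[4];

/*
 * Illustrative Xen PV override: with deferred struct-page init, the struct
 * pages backing the early pinned page tables were not fully set up when the
 * pages were pinned, so set the needed fields once bootmem is finished.
 */
static void xen_pv_init_after_bootmem(void)
{
	for (unsigned int i = 0; i < 4; i++)
		pinned_pgtable_pages[i].flags = 0x1;	/* e.g. mark as pinned */
	printf("xen pv: fixed up %u early page-table struct pages\n", 4U);
}

static void xen_pv_setup(void)
{
	hyper_ops.init_after_bootmem = xen_pv_init_after_bootmem;
}

int main(void)
{
	xen_pv_setup();			/* would run during early hypervisor setup */
	/* ... boot allocator runs and is torn down here ... */
	hyper_ops.init_after_bootmem();	/* generic init path notifies the guest */
	return 0;
}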