[Xen-devel] [PATCH v3] xen/arm: Do not allocate pte entries for MAP_SMALL_PAGES
From: Vijaya Kumar K <Vijaya.Kumar@xxxxxxxxxxxxxxxxxx>

On x86, for pages mapped with the PAGE_HYPERVISOR attribute, non-leaf page
tables are allocated and valid leaf pte entries are written, while with the
MAP_SMALL_PAGES attribute only non-leaf page tables are allocated and the
leaf pte entries are left invalid (valid bit set to 0). On arm this is not
the case: for pages mapped with either PAGE_HYPERVISOR or MAP_SMALL_PAGES,
both non-leaf and leaf page table entries are written with the valid bit
set. This arm behaviour makes the common vmap code fail to allocate memory
beyond 128MB, as described below.

In vm_init(), map_pages_to_xen() is called to map the vm_bitmap. Initially
one page of the vm_bitmap is allocated and mapped with the PAGE_HYPERVISOR
attribute; the remaining vm_bitmap pages are mapped with the
MAP_SMALL_PAGES attribute. On arm, both attributes set the valid bit to 1
in the pte entries for these mappings. As a result, map_pages_to_xen()
fails in vm_alloc() for allocations beyond 128MB: the mapping for the next
vm_bitmap page was already established in vm_init() with the valid bit set
in the pte entry, so map_pages_to_xen() on arm returns an error.

With this patch, the MAP_SMALL_PAGES attribute allocates only non-leaf
page tables, and an arch-specific populate_pt_range() API is introduced to
populate non-leaf page table entries for the requested pages. Bit[16] of
the attribute flags indicates whether leaf page table entries should be
written; this bit is cleared only for the MAP_SMALL_PAGES attribute.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@xxxxxxxxxxxxxxxxxx>
---
v3: - Fix typos in commit message
    - Introduce arch specific api populate_pt_range
v2: - Rename PTE_INVALID to PAGE_PRESENT
    - Re-define PAGE_* macros with PAGE_PRESENT
    - Rename parameter ai to flags
    - Introduce macro to check present flag and extract attribute
      index values
---
 xen/arch/arm/mm.c          |   18 +++++++++++++++---
 xen/arch/x86/mm.c          |    6 ++++++
 xen/common/vmap.c          |    2 +-
 xen/include/asm-arm/page.h |   25 +++++++++++++++++++++----
 xen/include/xen/mm.h       |    7 ++++++-
 5 files changed, 49 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 7d4ba0c..e0be36b 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -827,14 +827,15 @@ static int create_xen_table(lpae_t *entry)
 
 enum xenmap_operation {
     INSERT,
-    REMOVE
+    REMOVE,
+    RESERVE
 };
 
 static int create_xen_entries(enum xenmap_operation op,
                               unsigned long virt,
                               unsigned long mfn,
                               unsigned long nr_mfns,
-                              unsigned int ai)
+                              unsigned int flags)
 {
     int rc;
     unsigned long addr = virt, addr_end = addr + nr_mfns * PAGE_SIZE;
@@ -859,13 +860,17 @@ static int create_xen_entries(enum xenmap_operation op,
         switch ( op ) {
             case INSERT:
+            case RESERVE:
                 if ( third[third_table_offset(addr)].pt.valid )
                 {
                     printk("create_xen_entries: trying to replace an existing mapping addr=%lx mfn=%lx\n",
                            addr, mfn);
                     return -EINVAL;
                 }
-                pte = mfn_to_xen_entry(mfn, ai);
+                if ( op == RESERVE || !is_pte_present(flags) )
+                    break;
+
+                pte = mfn_to_xen_entry(mfn, get_pte_flags(flags));
                 pte.pt.table = 1;
                 write_pte(&third[third_table_offset(addr)], pte);
                 break;
@@ -898,6 +903,13 @@ int map_pages_to_xen(unsigned long virt,
 {
     return create_xen_entries(INSERT, virt, mfn, nr_mfns, flags);
 }
+
+int populate_pt_range(unsigned long virt, unsigned long mfn,
+                      unsigned long nr_mfns)
+{
+    return create_xen_entries(RESERVE, virt, mfn, nr_mfns, 0);
+}
+
 void destroy_xen_mappings(unsigned long v, unsigned long e)
 {
     create_xen_entries(REMOVE, v, 0, (e - v) >> PAGE_SHIFT, 0);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ca5369a..6e3cc24 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5701,6 +5701,12 @@ int map_pages_to_xen(
     return 0;
 }
 
+int populate_pt_range(unsigned long virt, unsigned long mfn,
+                      unsigned long nr_mfns)
+{
+    return map_pages_to_xen(virt, mfn, nr_mfns, MAP_SMALL_PAGES);
+}
+
 void destroy_xen_mappings(unsigned long s, unsigned long e)
 {
     bool_t locking = system_state > SYS_STATE_boot;
diff --git a/xen/common/vmap.c b/xen/common/vmap.c
index 783cea3..739d468 100644
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -40,7 +40,7 @@ void __init vm_init(void)
     bitmap_fill(vm_bitmap, vm_low);
 
     /* Populate page tables for the bitmap if necessary. */
-    map_pages_to_xen(va, 0, vm_low - nr, MAP_SMALL_PAGES);
+    populate_pt_range(va, 0, vm_low - nr);
 }
 
 void *vm_alloc(unsigned int nr, unsigned int align)
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 3e7b0ae..f743003 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -61,10 +61,27 @@
 #define DEV_WC          BUFFERABLE
 #define DEV_CACHED      WRITEBACK
 
-#define PAGE_HYPERVISOR         (WRITEALLOC)
-#define PAGE_HYPERVISOR_NOCACHE (DEV_SHARED)
-#define PAGE_HYPERVISOR_WC      (DEV_WC)
-#define MAP_SMALL_PAGES         PAGE_HYPERVISOR
+#define PAGE_PRESENT     (0x1 << 16)
+#define PAGE_NOT_PRESENT (0x0)
+
+/* bit[16] in the below flags can be used to know if a leaf
+ * PTE entry should be added or not. This is useful
+ * when ONLY non-leaf page table entries need to be allocated.
+ *
+ * bits[2:0] of the below flags correspond to AttrIndx[2:0],
+ * i.e. lpae_t.pt.ai
+ *
+ * For readability MAP_SMALL_PAGES is defined with PAGE_NOT_PRESENT
+ * even though PAGE_NOT_PRESENT is 0.
+ */
+
+#define PAGE_HYPERVISOR         (WRITEALLOC | PAGE_PRESENT)
+#define PAGE_HYPERVISOR_NOCACHE (DEV_SHARED | PAGE_PRESENT)
+#define PAGE_HYPERVISOR_WC      (DEV_WC | PAGE_PRESENT)
+#define MAP_SMALL_PAGES         (WRITEALLOC | PAGE_NOT_PRESENT)
+
+#define is_pte_present(x) ((x) & PAGE_PRESENT)
+#define get_pte_flags(x)  ((x) & 0x7)
 
 /*
  * Stage 2 Memory Type.
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 6ea8b8c..1109c84 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -55,7 +55,12 @@ int map_pages_to_xen(
     unsigned long nr_mfns, unsigned int flags);
 void destroy_xen_mappings(unsigned long v, unsigned long e);
-
+/*
+ * Create only non-leaf page table entries for the
+ * page range in Xen virtual address space.
+ */
+int populate_pt_range(unsigned long virt, unsigned long mfn,
+                      unsigned long nr_mfns);
 /* Claim handling */
 unsigned long domain_adjust_tot_pages(struct domain *d, long pages);
 int domain_set_outstanding_pages(struct domain *d, unsigned long pages);
-- 
1.7.9.5

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel