[Xen-devel] [PATCH] 32on64: increase size of compat argument translation area to 2 pages
# HG changeset patch
# User Ian Campbell <ian.campbell@xxxxxxxxxx>
# Date 1246625635 -3600
# Node ID cb45d91651df9b27d01a807a2be40e8a3460876f
# Parent 31002ac6a13caadab7c163d387af558a2b2da7ce
32on64: increase size of compat argument translation area to 2 pages.

The existing single page is not quite large enough to translate a
XENMEM_exchange hypercall with order=9. Since Linux uses
MAX_CONTIG_ORDER of 9 this seems like a reasonable upper bound to
support. Increasing COMPAT_ARG_XLAT_SIZE to 2 pages is sufficient to
support order 9 exchanges. PERCPU_SHIFT must also be increased since
the translation area is percpu.

This was observed through a driver which did a large
pci_alloc_consistent request.

Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

diff -r 31002ac6a13c -r cb45d91651df xen/include/asm-x86/percpu.h
--- a/xen/include/asm-x86/percpu.h	Thu Jul 02 09:43:41 2009 +0100
+++ b/xen/include/asm-x86/percpu.h	Fri Jul 03 13:53:55 2009 +0100
@@ -1,7 +1,7 @@
 #ifndef __X86_PERCPU_H__
 #define __X86_PERCPU_H__
 
-#define PERCPU_SHIFT 13
+#define PERCPU_SHIFT 14
 #define PERCPU_SIZE (1UL << PERCPU_SHIFT)
 
 /* Separate out the type, so (int[3], foo) works. */
diff -r 31002ac6a13c -r cb45d91651df xen/include/asm-x86/x86_64/uaccess.h
--- a/xen/include/asm-x86/x86_64/uaccess.h	Thu Jul 02 09:43:41 2009 +0100
+++ b/xen/include/asm-x86/x86_64/uaccess.h	Fri Jul 03 13:53:55 2009 +0100
@@ -2,7 +2,7 @@
 #define __X86_64_UACCESS_H
 
 #define COMPAT_ARG_XLAT_VIRT_BASE this_cpu(compat_arg_xlat)
-#define COMPAT_ARG_XLAT_SIZE PAGE_SIZE
+#define COMPAT_ARG_XLAT_SIZE 2*PAGE_SIZE
 DECLARE_PER_CPU(char, compat_arg_xlat[COMPAT_ARG_XLAT_SIZE]);
 #define is_compat_arg_xlat_range(addr, size) ({ \
     unsigned long __off; \
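For a rough sense of why a single 4KiB page falls short, consider the MAX_CONTIG_ORDER == 9 case the commit message mentions: Linux's xen_create_contiguous_region() (the path behind a large pci_alloc_consistent) issues a XENMEM_exchange trading 2^9 order-0 pages for one order-9 extent, so the input frame list alone already translates to 512 eight-byte xen_pfn_t entries. The snippet below is only back-of-the-envelope arithmetic under those assumptions; the 128-byte allowance for the translated xen_memory_exchange struct is a hypothetical round number, not taken from the Xen headers.

/*
 * Illustrative sizing only -- not part of the patch.  Assumes the
 * order=9 exchange described above: 2^9 order-0 input extents, whose
 * frame list is translated into 64-bit xen_pfn_t entries.
 */
#include <stdint.h>
#include <stdio.h>

#define XLAT_PAGE_SIZE 4096UL   /* x86 PAGE_SIZE */
#define EXCHANGE_ORDER 9UL      /* Linux MAX_CONTIG_ORDER */

int main(void)
{
    unsigned long in_extents   = 1UL << EXCHANGE_ORDER;          /* 512  */
    unsigned long frame_list   = in_extents * sizeof(uint64_t);  /* 4096 */
    unsigned long struct_bytes = 128;        /* hypothetical allowance   */
    unsigned long total        = frame_list + struct_bytes;
    unsigned long pages        = (total + XLAT_PAGE_SIZE - 1) / XLAT_PAGE_SIZE;

    printf("~%lu bytes of translated arguments -> %lu page(s)\n",
           total, pages);   /* ~4224 bytes -> 2 pages */
    return 0;
}

Because compat_arg_xlat is declared per-CPU (the DECLARE_PER_CPU line in the diff), the extra page also has to fit inside each CPU's per-CPU area, which is why the patch bumps PERCPU_SHIFT from 13 to 14 (8KiB to 16KiB per CPU).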