[Xen-changelog] [xen master] x86: remove X86_INTEL_USERCOPY code
commit a9758fb369c558dacccf67800b178ab77a17d011
Author:     Matt Wilson <msw@xxxxxxxxxx>
AuthorDate: Fri Aug 30 10:54:00 2013 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Aug 30 10:54:00 2013 +0200

    x86: remove X86_INTEL_USERCOPY code

    Nothing defines CONFIG_X86_INTEL_USERCOPY, and as far as I can tell it
    was never used even when Xen supported 32-bit x86.

    Signed-off-by: Matt Wilson <msw@xxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
 xen/arch/x86/cpu/intel.c | 21 ---------------------
 1 files changed, 0 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 9b71d36..072ecbc 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -18,13 +18,6 @@
 
 #define select_idle_routine(x) ((void)0)
 
-#ifdef CONFIG_X86_INTEL_USERCOPY
-/*
- * Alignment at which movsl is preferred for bulk memory copies.
- */
-struct movsl_mask movsl_mask __read_mostly;
-#endif
-
 static unsigned int probe_intel_cpuid_faulting(void)
 {
 	uint64_t x;
@@ -229,20 +222,6 @@ static void __devinit init_intel(struct cpuinfo_x86 *c)
 	/* Work around errata */
 	Intel_errata_workarounds(c);
 
-#ifdef CONFIG_X86_INTEL_USERCOPY
-	/*
-	 * Set up the preferred alignment for movsl bulk memory moves
-	 */
-	switch (c->x86) {
-	case 6:		/* PII/PIII only like movsl with 8-byte alignment */
-		movsl_mask.mask = 7;
-		break;
-	case 15:	/* P4 is OK down to 8-byte alignment */
-		movsl_mask.mask = 7;
-		break;
-	}
-#endif
-
 	if ((c->x86 == 0xf && c->x86_model >= 0x03) ||
 	    (c->x86 == 0x6 && c->x86_model >= 0x0e))
 		set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
--
generated by git-patchbot for /home/xen/git/xen.git#master

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
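Editor's note, for context on the removed code: movsl_mask was meant to record the
alignment at which "rep movsl" becomes the preferred bulk-copy strategy (mask = 7,
i.e. 8-byte alignment, in the removed switch). Xen never had a consumer of it, which
is why the patch deletes it. The following is a minimal, purely illustrative C sketch
of the kind of check such a mask is designed to gate; the names movsl_is_ok and
copy_example are hypothetical and not part of Xen or of this patch.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Mirrors the removed Xen structure: preferred alignment minus one. */
struct movsl_mask {
	unsigned int mask;
};

/* Assumed value: 7, i.e. prefer movsl only for 8-byte-aligned buffers. */
static struct movsl_mask movsl_mask = { .mask = 7 };

/* Nonzero if both pointers meet the preferred alignment for a movsl copy. */
static int movsl_is_ok(const void *dst, const void *src)
{
	return (((uintptr_t)dst | (uintptr_t)src) & movsl_mask.mask) == 0;
}

void copy_example(void *dst, const void *src, size_t n)
{
	if (movsl_is_ok(dst, src)) {
		/* Real code would issue an inline-asm "rep movsl" loop here. */
		memcpy(dst, src, n);
	} else {
		/* Fall back to a byte-wise/unrolled copy for unaligned buffers. */
		memcpy(dst, src, n);
	}
}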