[Xen-changelog] Ensure we initialise the cpu_present_map before topology_init() is called
# HG changeset patch
# User kaf24@xxxxxxxxxxxxxxxxxxxx
# Node ID 5e111356ba17602f474e36da6571de490157981a
# Parent  2acbe70dd418a963fc15d6b3f5bd0ecf76881f50
Ensure we initialise the cpu_present_map before topology_init() is
called. In latest Linux kernels it iterates over cpu_present_map rather
than cpu_possible_map.

Also, save/restore should really iterate over possible cpus, not
present ones (not that it really matters, but it's a small cleanup).

Signed-off-by: Keir Fraser <keir@xxxxxxxxxxxxx>

diff -r 2acbe70dd418 -r 5e111356ba17 linux-2.6-xen-sparse/arch/xen/kernel/reboot.c
--- a/linux-2.6-xen-sparse/arch/xen/kernel/reboot.c	Tue Nov 15 16:30:55 2005
+++ b/linux-2.6-xen-sparse/arch/xen/kernel/reboot.c	Tue Nov 15 17:43:28 2005
@@ -188,7 +188,7 @@
 	xenbus_resume();
 
 #ifdef CONFIG_SMP
-	for_each_present_cpu(i)
+	for_each_cpu(i)
 		vcpu_prepare(i);
 
  out_reenable_cpus:
diff -r 2acbe70dd418 -r 5e111356ba17 linux-2.6-xen-sparse/arch/xen/kernel/smpboot.c
--- a/linux-2.6-xen-sparse/arch/xen/kernel/smpboot.c	Tue Nov 15 16:30:55 2005
+++ b/linux-2.6-xen-sparse/arch/xen/kernel/smpboot.c	Tue Nov 15 17:43:28 2005
@@ -277,6 +277,18 @@
 
 #ifdef CONFIG_HOTPLUG_CPU
 
+/*
+ * Initialize cpu_present_map late to skip SMP boot code in init/main.c.
+ * But do it early enough to catch critical for_each_present_cpu() loops
+ * in i386-specific code.
+ */
+static int __init initialize_cpu_present_map(void)
+{
+	cpu_present_map = cpu_possible_map;
+	return 0;
+}
+core_initcall(initialize_cpu_present_map);
+
 static void vcpu_hotplug(unsigned int cpu)
 {
 	int err;
@@ -293,7 +305,6 @@
 	}
 
 	if (strcmp(state, "online") == 0) {
-		cpu_set(cpu, cpu_present_map);
 		(void)cpu_up(cpu);
 	} else if (strcmp(state, "offline") == 0) {
 		(void)cpu_down(cpu);
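
The core_initcall() used above is early enough because Linux runs initcalls in ascending level order: core_initcall() callbacks execute before subsys_initcall() callbacks, and topology_init() is registered at subsys level in the i386 code, so cpu_present_map is already populated by the time its for_each_present_cpu() loop runs. The stand-alone C sketch below only mimics that ordering with plain bitmasks to make the reasoning concrete; possible_map, present_map, fake_core_initcall() and fake_topology_init() are illustrative names, not kernel APIs.

/*
 * User-space illustration of the initcall ordering argument: the
 * "core" step copies the possible map into the present map before the
 * "subsys" step walks the present map.  Not kernel code.
 */
#include <stdio.h>

#define NR_CPUS 4

static unsigned long possible_map = 0xf;   /* filled in by early boot/Xen */
static unsigned long present_map;          /* initially empty */

/* Stands in for initialize_cpu_present_map(): runs at core_initcall time. */
static int fake_core_initcall(void)
{
	present_map = possible_map;
	return 0;
}

/* Stands in for topology_init(): runs later, at subsys_initcall time. */
static int fake_topology_init(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (present_map & (1UL << cpu))
			printf("registering cpu %d\n", cpu);
	return 0;
}

int main(void)
{
	/* do_initcalls() walks the levels in ascending order. */
	fake_core_initcall();   /* present map now covers every possible vcpu */
	fake_topology_init();   /* sees all of them */
	return 0;
}

The reboot.c hunk is the other side of the same distinction: in kernels of this vintage for_each_cpu() iterates over cpu_possible_map (later kernels call this for_each_possible_cpu()), so save/restore prepares every vcpu that could ever come online rather than only those currently marked present.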