Re: [Xen-devel] [PATCH v6 3/7] x86: initialize per socket cpu map
>>> On 28.01.14 at 15:12, "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@xxxxxxxxx> wrote:
>> > For each socket in the system, we create a separate bitmap to tag its
>> > related CPUs. This per-socket bitmap will be initialized at system
>> > start-up and adjusted when a CPU is dynamically brought online/offline.
>>
>> There's no reasoning here at all why cpu_sibling_mask and
>> cpu_core_mask aren't sufficient.
>
> The new mask is to mark socket CPUs, and they may be different from
> cpu_sibling_mask and cpu_core_mask...

Sorry, I don't follow: cpu_core_mask represents all cores sitting on the
same socket as the "owning" CPU. How is that different from "marking
socket CPUs"?

>> > --- a/xen/arch/x86/smpboot.c
>> > +++ b/xen/arch/x86/smpboot.c
>> > @@ -59,6 +59,9 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
>> >  cpumask_t cpu_online_map __read_mostly;
>> >  EXPORT_SYMBOL(cpu_online_map);
>> >
>> > +cpumask_t socket_cpu_map[MAX_NUM_SOCKETS] __read_mostly;
>> > +EXPORT_SYMBOL(socket_cpu_map);
>>
>> And _if_ we really need it, then it should be done in a better way
>> than via a statically sized array, the size of which can't even be
>> overridden on the build and/or hypervisor command line.
>
> I saw that current Xen code uses a lot of such static macros, e.g. NR_CPUS.

For one, the number of these has been decreasing over time. And then
NR_CPUS _can_ be controlled from the make command line.

> This reminds me of one thing: can we define MAX_NUM_SOCKETS as NR_CPUS,
> since the socket count cannot exceed the CPU count?

That might be an option, but only if this construct is really needed in
the first place.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
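To make the sizing point concrete, here is a minimal, self-contained C sketch of the
idea under discussion: tag each CPU in a per-socket bitmap derived from a
cpu-to-socket mapping, with the map array sized by the CPU count rather than a
separate MAX_NUM_SOCKETS, since a system cannot have more sockets than CPUs. The
names NR_CPUS, cpu_to_socket and socket_cpu_map mirror the thread, but the raw bit
operations and the fixed example topology are illustrative stand-ins, not the
patch's or Xen's actual code; Xen itself would use cpumask_t and its cpumask_*
helpers.

#include <stdio.h>
#include <limits.h>

#define NR_CPUS 8                       /* stand-in for Xen's NR_CPUS setting */
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define MASK_LONGS ((NR_CPUS + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* One CPU bitmap per possible socket; sockets <= CPUs, so NR_CPUS entries
 * suffice and no separate MAX_NUM_SOCKETS is needed. */
static unsigned long socket_cpu_map[NR_CPUS][MASK_LONGS];

/* Example topology: the socket each CPU sits on (two 4-core sockets). */
static const unsigned int cpu_to_socket[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

static void set_cpu(unsigned long *mask, unsigned int cpu)
{
    mask[cpu / BITS_PER_LONG] |= 1UL << (cpu % BITS_PER_LONG);
}

int main(void)
{
    unsigned int cpu, socket;

    /* Boot-time pass: tag every CPU in its socket's map.  A CPU hotplug
     * handler would set/clear the same bit on online/offline events. */
    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
        set_cpu(socket_cpu_map[cpu_to_socket[cpu]], cpu);

    /* Print the (first word of the) mask for every populated socket. */
    for ( socket = 0; socket < NR_CPUS; socket++ )
        if ( socket_cpu_map[socket][0] )
            printf("socket %u: cpu mask %#lx\n", socket,
                   socket_cpu_map[socket][0]);

    return 0;
}

Whether such a map is needed at all, rather than deriving the same information on
demand from cpu_core_mask, is exactly the open question in the thread above.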