Re: [Xen-devel] [PATCH 3/3] x86/smt: Support for enabling/disabling SMT at runtime
On 03/04/2019 10:33, Jan Beulich wrote:
>>>> On 02.04.19 at 21:57, <andrew.cooper3@xxxxxxxxxx> wrote:
>> Currently, a user can combine the output of `xl info -n`, the ACPI tables,
>> and some manual CPUID data to figure out which CPU numbers to feed into
>> `xen-hptool cpu-offline` to effectively disable SMT at runtime.
>>
>> A more convenient option is to teach Xen how to perform this action.
>>
>> First of all, extend XEN_SYSCTL_cpu_hotplug with two new operations.
>> Introduce new smt_{up,down}_helper() functions which wrap the
>> cpu_{up,down}_helper() helpers with logic which understands siblings based
>> on their APIC_ID.
>>
>> Add libxc stubs, and extend xen-hptool with smt-{enable,disable} options.
>> These are intended to be shorthands for a loop over cpu-{online,offline}.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> ---
>> CC: Jan Beulich <JBeulich@xxxxxxxx>
>> CC: Wei Liu <wei.liu2@xxxxxxxxxx>
>> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>>
>> Slightly RFC.  I'm not very happy with the continuation situation, but
>> -EBUSY is the preexisting style and it seems like it is the only option
>> from tasklet context.
>
> Well, offloading the re-invocation to the caller isn't really nice.
> Looking at the code, is there any reason why we couldn't use
> the usual -ERESTART / hypercall_create_continuation()?  This
> would require a little bit of re-work, in particular to allow
> passing the vCPU into hypercall_create_continuation(), but
> beyond that I can't see any immediate obstacles.  Though
> clearly I wouldn't make this a prereq requirement for the work
> here.

The problem isn't really the ERESTART.  We could do some plumbing and make
it work, but the real problem is that I can't stash the current cpu index in
the sysctl data block across the continuation point.

At the moment, the loop depends on, once all CPUs are in the correct state,
getting through the for_each_present_cpu() loop without taking a further
continuation.
>
>> Is it intentional that we can actually online and offline processors
>> beyond maxcpus?  This is a consequence of the cpu parking logic.
>
> I think so, yes.  That's meant to be a boot time limit only imo.
> The runtime limit is nr_cpu_ids.
>
>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -60,7 +60,7 @@ static bool __initdata opt_nosmp;
>>  boolean_param("nosmp", opt_nosmp);
>>
>>  /* maxcpus: maximum number of CPUs to activate. */
>> -static unsigned int __initdata max_cpus;
>> +unsigned int max_cpus;
>>  integer_param("maxcpus", max_cpus);
>
> As per above I don't think this change should be needed or
> wanted, but if so for whatever reason, wouldn't the variable
> better be __read_mostly?

__read_mostly, yes, but as to whether the change is needed, that entirely
depends on whether the change in semantics to maxcpus= was accidental or
intentional.

>
>> --- a/xen/arch/x86/sysctl.c
>> +++ b/xen/arch/x86/sysctl.c
>> @@ -114,6 +114,92 @@ long cpu_down_helper(void *data)
>>      return ret;
>>  }
>>
>> +static long smt_up_helper(void *data)
>> +{
>> +    unsigned int cpu, sibling_mask =
>> +        (1u << (boot_cpu_data.x86_num_siblings - 1)) - 1;
>
> I don't think this is quite right for higher than 2-thread configurations.
> In detect_extended_topology() terms, don't you simply mean
> (1u << ht_mask_width) - 1 here, i.e. just
> boot_cpu_data.x86_num_siblings - 1 (without any shifting)?

Good point, yes.

>
>> +    int ret = 0;
>> +
>> +    if ( !cpu_has_htt || !sibling_mask )
>> +        return -EOPNOTSUPP;
>
> Why not put the first part of the check right into the sysctl
> handler?

Can do.  I think this is a side effect of how it developed.

>
>> +    opt_smt = true;
>
> Perhaps also bail early when the variable already has the
> designated value?  And again perhaps right in the sysctl
> handler?

That is not safe across continuations.  While it would be a very silly thing
to do, there could be two callers which are fighting over whether SMT is
disabled or enabled.
>
>> +    for_each_present_cpu ( cpu )
>> +    {
>> +        if ( cpu == 0 )
>> +            continue;
>
> Is this special case really needed?  If so, perhaps worth a brief
> comment?

Trying to down cpu 0 is a hard -EINVAL.

>
>> +        if ( cpu >= max_cpus )
>> +            break;
>> +
>> +        if ( x86_cpu_to_apicid[cpu] & sibling_mask )
>> +            ret = cpu_up_helper(_p(cpu));
>
> Shouldn't this be restricted to CPUs a sibling of which is already
> online?  And widened at the same time, to also online thread 0
> if one of the other threads is already online?

Unfortunately, that turns into a rat's nest very quickly, which is why I gave
up and simplified the semantics to strictly "this shall {on,off}line the
nonzero sibling threads".

This is a convenience for people wanting to do a one-time reconfiguration of
the system, and indeed, it has multiple end user requests behind its coming
into existence.  Users who are already hotplugging aren't going to be
interested in this functionality.  As the usecases don't overlap, I went for
the simplest logic.

> Also any reason you use _p() here but not in patch 2?

I thought I'd fixed patch 2 up, but I clearly hadn't.

> I also notice that the two functions are extremely similar, and
> hence it might be worthwhile considering to fold them, with the
> caller controlling the behavior via the so far unused function
> parameter (at which point the related remark of mine on patch
> 2 would become inapplicable).

By passing the plug boolean in via data?  Yes, I suppose they are rather more
similar than they started out.

>
>> --- a/xen/include/public/sysctl.h
>> +++ b/xen/include/public/sysctl.h
>> @@ -246,8 +246,17 @@ struct xen_sysctl_get_pmstat {
>>  struct xen_sysctl_cpu_hotplug {
>>      /* IN variables */
>>      uint32_t cpu; /* Physical cpu. */
>> +
>> +    /* Single CPU enable/disable. */
>>  #define XEN_SYSCTL_CPU_HOTPLUG_ONLINE  0
>>  #define XEN_SYSCTL_CPU_HOTPLUG_OFFLINE 1
>> +
>> +    /*
>> +     * SMT enable/disable.  Caller must zero the 'cpu' field to begin, and
>> +     * ignore it on completion.
>> +     */
>> +#define XEN_SYSCTL_CPU_HOTPLUG_SMT_ENABLE  2
>> +#define XEN_SYSCTL_CPU_HOTPLUG_SMT_DISABLE 3
>
> Is the "cpu" field constraint mentioned in the comment just a
> precaution?  I can't see you encode anything into that field, or
> use it upon getting re-invoked.  I assume that's because of the
> expectation that only actual onlining/offlining would potentially
> take long, while iterating over all present CPUs without further
> action ought to be fast enough.

Ah - that was stale from before I encountered the "fun" of continuations
from tasklet context.  I would prefer to find a better way, but short of
doing a full vcpu context switch, I don't see an option.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel