Re: [Xen-devel] [PATCH v2 4/6] x86: bring up all CPUs even if not all are supposed to be used
>>> On 19.07.18 at 01:48, <konrad.wilk@xxxxxxxxxx> wrote:
> On Wed, Jul 18, 2018 at 02:21:53AM -0600, Jan Beulich wrote:
>> Reportedly Intel CPUs which can't broadcast #MC to all targeted
>> cores/threads because some have CR4.MCE clear will shut down. Therefore
>> we want to keep CR4.MCE enabled when offlining a CPU, and we need to
>> bring up all CPUs in order to be able to set CR4.MCE in the first place.
>>
>> The use of clear_in_cr4() in cpu_mcheck_disable() was ill-advised
>> anyway, and to avoid similar future mistakes I'm removing clear_in_cr4()
>> altogether right here.
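[Editor's note, for context: in the Xen of this era, set_in_cr4() /
clear_in_cr4() update the global mmu_cr4_features template as well as the
local CR4, roughly as paraphrased below (a sketch, not the exact source).
That is what made the clear_in_cr4() call in cpu_mcheck_disable() dangerous:
it would drop CR4.MCE from the value used to initialise every subsequently
onlined CPU.]

    /* Paraphrased sketch of the helpers under discussion; the real
     * definitions live in the hypervisor headers and may differ. */
    static inline void set_in_cr4(unsigned long mask)
    {
        mmu_cr4_features |= mask;       /* template for later bring-ups */
        write_cr4(read_cr4() | mask);   /* this CPU's CR4 */
    }

    static inline void clear_in_cr4(unsigned long mask)
    {
        mmu_cr4_features &= ~mask;      /* affects *all* future bring-ups */
        write_cr4(read_cr4() & ~mask);
    }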
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> v2: Use ROUNDUP().
>> ---
>> Instead of fully bringing up CPUs and then calling cpu_down(), another
>> option would be to suppress/cancel full bringup in smp_callin(). But I
>> guess we should try to keep things simple for now, and see later whether
>> this can be "optimized".
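[Editor's note: purely as illustration of the approach described above, a
minimal sketch under assumed Xen helpers (cpu_up()/cpu_down(), cpu_online(),
num_online_cpus(), for_each_present_cpu()); park_excess_cpus() is an
invented name for illustration, not the patch itself.]

    /* Sketch: online every present CPU so each sets CR4.MCE on the way
     * up, then immediately take the ones beyond max_cpus back down
     * ("parking" them). Error handling is reduced to a printk(). */
    static void __init park_excess_cpus(unsigned int max_cpus)
    {
        unsigned int cpu;

        for_each_present_cpu ( cpu )
        {
            if ( cpu_online(cpu) )
                continue;

            if ( cpu_up(cpu) )
            {
                printk("CPU %u: bring-up failed\n", cpu);
                continue;
            }

            /* CR4.MCE is now set on this CPU; park it if over the limit. */
            if ( num_online_cpus() > max_cpus )
                cpu_down(cpu);
        }
    }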
>> ---
>> Note: The parked CPUs can be brought online (i.e. the meaning of
>> "maxcpus=" isn't as strict anymore as it was before), but won't
>> immediately be used for scheduling pre-existing Dom0 CPUs. That's
>> because dom0_setup_vcpu() artificially restricts the affinity. For
>> DomU-s whose affinity was not artificially restricted, no such
>> limitation exists, albeit the shown "soft" affinity appears to
>> suffer a similar issue. As that's not a goal of this patch, I've
>> put the issues on the side for now, perhaps for someone else to
>> take care of.
>> Note: On one of my test systems the parked CPUs get _PSD data reported
>> by Dom0 that is different from the non-parked ones (coord_type is
>> 0xFC instead of 0xFE). Giving Dom0 enough vCPU-s eliminates this
>
> From drivers/xen/xen-acpi-processor.c:
>
>     /* 'acpi_processor_preregister_performance' does not parse if the
>      * num_processors <= 1, but Xen still requires it. Do it manually here.
>      */
>     if (pdomain->num_processors <= 1) {
>             if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ALL)
>                     dst->shared_type = CPUFREQ_SHARED_TYPE_ALL;
>             else if (pdomain->coord_type == DOMAIN_COORD_TYPE_HW_ALL)
>                     dst->shared_type = CPUFREQ_SHARED_TYPE_HW;
>             else if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ANY)
>                     dst->shared_type = CPUFREQ_SHARED_TYPE_ANY;
>     }
>
> ?
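[Editor's note: for reference, the DOMAIN_COORD_TYPE_* values in the snippet
above come from Linux's include/acpi/processor.h and mirror the ACPI _PSD
coordination types, so the 0xFC-vs-0xFE observation in the patch note
amounts to SW_ALL vs. HW_ALL:]

    /* ACPI _PSD coordination types (include/acpi/processor.h): */
    #define DOMAIN_COORD_TYPE_SW_ALL    0xfc  /* SW: all CPUs must coordinate */
    #define DOMAIN_COORD_TYPE_SW_ANY    0xfd  /* SW: any one CPU may initiate */
    #define DOMAIN_COORD_TYPE_HW_ALL    0xfe  /* HW coordinates by itself */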
Yes, I had found that code, but pdomain->num_processors shouldn't
depend on the number of _v_CPU-s in Dom0 afaict. Yet, as said, the
problem went away when running Dom0 with as many vCPU-s as
there are onlined _and_ parked threads / cores. When I get back to
debugging this further (unless an explanation / solution turns up
earlier), I could certainly go and double-check whether this code
comes into play at all, and if so, whether it has a bad effect in the
particular case here.
Jan
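[Editor's note: one cheap way to check whether the quoted path is taken at
all, as suggested above: a hypothetical one-line debug aid dropped just
above the num_processors check in drivers/xen/xen-acpi-processor.c. The
pr_info() call below is an illustration, not part of the file; struct
acpi_psd_package does carry domain, num_processors and coord_type as u64
fields.]

    /* Hypothetical debug aid (not in the file): dump what Dom0 sees
     * before dst->shared_type is chosen. The fields are u64, hence
     * the casts for portable printing. */
    pr_info("xen-acpi-processor: _PSD domain %llu: num_processors=%llu coord_type=%#llx\n",
            (unsigned long long)pdomain->domain,
            (unsigned long long)pdomain->num_processors,
            (unsigned long long)pdomain->coord_type);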
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel