Re: [PATCH v2] x86/intel: ensure Global Performance Counter Control is setup correctly
On Thu, Jan 11, 2024 at 11:11:07AM +0100, Jan Beulich wrote:
> On 11.01.2024 10:08, Roger Pau Monne wrote:
> > When Architectural Performance Monitoring is available, the PERF_GLOBAL_CTRL
> > MSR contains per-counter enable bits that are ANDed with the enable bit in
> > the counter EVNTSEL MSR in order for a PMC counter to be enabled.
> >
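[ Illustrative aside: the gating described above amounts to the check sketched
below. This is standalone C for illustration only, not Xen code; the helper
name is made up, and bit 22 of IA32_PERFEVTSELx is its EN bit. ]

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Illustrative only: general-purpose PMC i counts when both its enable
     * bit in IA32_PERF_GLOBAL_CTRL and the EN bit (bit 22) of the matching
     * IA32_PERFEVTSELi MSR are set.
     */
    static bool pmc_is_counting(uint64_t global_ctrl, uint64_t evntsel,
                                unsigned int i)
    {
        return (global_ctrl & (1ULL << i)) && (evntsel & (1ULL << 22));
    }
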
> > So far the watchdog code seems to have relied on the PERF_GLOBAL_CTRL enable
> > bits being set by default, but at least on some Intel Sapphire and Emerald
> > Rapids this is no longer the case, and Xen reports:
> >
> > Testing NMI watchdog on all CPUs: 0 40 stuck
> >
> > The first CPU on each package is started with PERF_GLOBAL_CTRL zeroed, so
> > PMC0 doesn't start counting when the enable bit in EVNTSEL0 is set, due to
> > the relevant enable bit in PERF_GLOBAL_CTRL not being set.
> >
> > Check and adjust PERF_GLOBAL_CTRL during CPU initialization so that all the
> > general-purpose PMCs are enabled. Doing so brings the state of the
> > package-BSP PERF_GLOBAL_CTRL in line with the rest of the CPUs on the system.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > ---
> > Changes since v1:
> >  - Do the adjustment of PERF_GLOBAL_CTRL even if the watchdog is not used,
> >    and enable all counters.
> > ---
> > Unsure whether printing a warning when the value of PERF_GLOBAL_CTRL is not
> > correct would be of any value, hence I haven't added one.
> > ---
> > xen/arch/x86/cpu/intel.c | 18 +++++++++++++++++-
> > 1 file changed, 17 insertions(+), 1 deletion(-)
> >
> > diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
> > index dfee64689ffe..40d3eb0e18a7 100644
> > --- a/xen/arch/x86/cpu/intel.c
> > +++ b/xen/arch/x86/cpu/intel.c
> > @@ -533,9 +533,25 @@ static void cf_check init_intel(struct cpuinfo_x86 *c)
> >  	init_intel_cacheinfo(c);
> >  	if (c->cpuid_level > 9) {
> >  		unsigned eax = cpuid_eax(10);
> > +		unsigned int cnt = (uint8_t)(eax >> 8);
> > +
> >  		/* Check for version and the number of counters */
> > -		if ((eax & 0xff) && (((eax>>8) & 0xff) > 1))
> > +		if ((eax & 0xff) && (cnt > 1) && (cnt <= 32)) {
>
> I may not have looked closely enough, but I didn't find the limit of
> 32 being stated anywhere.
Hm, my copy of the SDM vol4 states that the enable bits for the general-purpose
PMCs are bits n-1:0, with bits 31:n reserved, where n is CPUID.0AH: EAX[15:8].
Bits 32, 33 and 34 control the Fixed PMCs.
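[ A minimal standalone sketch of that layout, mirroring what the hunk below
computes as cnt_mask; the function and parameter names here are mine, not the
patch's. ]

    #include <stdint.h>

    /*
     * Derive the IA32_PERF_GLOBAL_CTRL mask covering all general-purpose
     * PMCs from CPUID leaf 0xa EAX.  The enable bits occupy bits cnt-1:0,
     * which is why cnt can be at most 32; bits 32-34 (the fixed-counter
     * enables) are left alone.
     */
    static uint64_t gp_pmc_mask(uint32_t cpuid_0xa_eax)
    {
        unsigned int cnt = (cpuid_0xa_eax >> 8) & 0xff; /* CPUID.0AH:EAX[15:8] */

        if (!cnt || cnt > 32)
            return 0;

        return (1ULL << cnt) - 1; /* e.g. cnt == 8 -> 0xff */
    }
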
> > +			uint64_t global_ctrl;
> > +			unsigned int cnt_mask = (1UL << cnt) - 1;
>
> Bits 2 + 4 * n have an additional qualification as per SDM vol 4.
Let me update my copy...
Looking at the version from December 2023, I see:
0       EN_PMC0   If CPUID.0AH: EAX[15:8] > 0
1       EN_PMC1   If CPUID.0AH: EAX[15:8] > 1
2       EN_PMC2   If CPUID.0AH: EAX[15:8] > 2
n       EN_PMCn   If CPUID.0AH: EAX[15:8] > n
31:n+1  Reserved.
And I'm afraid I'm not able to infer this additional qualification of
bits 2 + 4 * n.
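[ For completeness, the adjustment described in the patch boils down to roughly
the following. rdmsrl/wrmsrl and MSR_CORE_PERF_GLOBAL_CTRL are Xen's existing
MSR accessors and MSR constant, cnt is the counter count from the quoted hunk;
since the rest of the hunk is trimmed from the quote above, treat this as an
approximation rather than the patch itself. ]

        uint64_t global_ctrl;
        unsigned int cnt_mask = (1UL << cnt) - 1;

        rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);

        /* Make sure every general-purpose PMC has its enable bit set. */
        if ((global_ctrl & cnt_mask) != cnt_mask)
            wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl | cnt_mask);
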
Thanks, Roger.