Re: [Xen-devel] [PATCH v2] x86/nmi: correctly check MSB of P6 performance counter MSR in watchdog
>>> On 26.02.19 at 16:13, <igor.druzhinin@xxxxxxxxxx> wrote:
> On 26/02/2019 14:58, Jan Beulich wrote:
>>>>> On 26.02.19 at 14:25, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 26/02/2019 13:12, Igor Druzhinin wrote:
>>>
>>>> @@ -323,6 +327,10 @@ static void setup_p6_watchdog(unsigned counter)
>>>>      unsigned int evntsel;
>>>>
>>>>      nmi_perfctr_msr = MSR_P6_PERFCTR(0);
>>>> +    if ( current_cpu_data.cpuid_level >= 0xa )
>>>> +        nmi_p6_event_width = MASK_EXTR(cpuid_eax(0xa), P6_EVENT_WIDTH_MASK);
>>>> +    if ( !nmi_p6_event_width )
>>>> +        nmi_p6_event_width = P6_EVENT_WIDTH_DEFAULT;
>>>
>>> This is called on each cpu, but writes to a shared variable. This
>>> entire block wants to be something like:
>>>
>>>     if ( !nmi_p6_event_width )
>>>         nmi_p6_event_width = (current_cpu_data.cpuid_level >= 0xa) ?
>>>             MASK_EXTR(cpuid_eax(0xa), P6_EVENT_WIDTH_MASK) : P6_EVENT_WIDTH_DEFAULT;
>>>
>>> (suitably wrapped) so it gets filled only once.
>>
>> Plus the low and high capping of the value read, as requested in
>> reply to v1.
>
> Isn't it the counter clipping you requested (which was implemented in
> v2)? What is the point in high capping? Is it safe to continue if the
> value read is lower than P6_EVENT_WIDTH_DEFAULT or should I panic here?

I think panic()ing is an opportune action in this situation, but I
wouldn't mind just going with 32 instead in this case, or failing to
enable the watchdog. On real hardware the value isn't going to be any
lower, and in virtualized environments we'd have to figure why the
value is smaller anyway.

As to the high capping - as said in the v1 thread, I'd like this to be
there in particular for the shift operations to not become undefined.

Jan
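A minimal stand-alone sketch of the width read and clamping being discussed,
assuming GCC on x86 in user space; the low cap at the 32-bit P6 default and the
high cap at 64 are illustrative bounds, not taken from the eventual patch. It
mirrors the CPUID leaf 0xa extraction from the hunk above and shows why the high
cap keeps the MSB shift well defined:

/*
 * Minimal sketch, not the committed Xen patch: read the general-purpose
 * counter bit width from CPUID leaf 0xa, EAX[23:16], then clamp it --
 * low cap at the 32-bit P6 default, high cap at 64 so the MSB shift
 * below cannot become undefined.
 */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax = 0, ebx, ecx, edx;
    unsigned int width = 32;                       /* P6 default */

    if ( __get_cpuid_max(0, NULL) >= 0xa &&
         __get_cpuid(0xa, &eax, &ebx, &ecx, &edx) )
        width = (eax >> 16) & 0xff;                /* EAX[23:16] */

    if ( width < 32 )
        width = 32;     /* or refuse to enable the watchdog */
    if ( width > 64 )
        width = 64;     /* keep the shift below well defined */

    printf("counter width: %u bits, MSB mask: %#llx\n",
           width, 1ULL << (width - 1));
    return 0;
}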