Re: [PATCH] x86/boot: Fix early exception handling with CONFIG_PERF_COUNTERS
On 15.04.2020 19:39, Andrew Cooper wrote:
> The PERFC_INCR() macro uses current->processor, but current is not valid
> during early boot. This causes the following crash to occur if
> e.g. rdmsr_safe() has to recover from a #GP fault.
>
>   (XEN) Early fatal page fault at e008:ffff82d0803b1a39 (cr2=0000000000000004, ec=0000)
>   (XEN) ----[ Xen-4.14-unstable  x86_64  debug=y   Not tainted ]----
>   (XEN) CPU:    0
>   (XEN) RIP:    e008:[<ffff82d0803b1a39>] x86_64/entry.S#handle_exception_saved+0x64/0xb8
>   ...
>   (XEN) Xen call trace:
>   (XEN)    [<ffff82d0803b1a39>] R x86_64/entry.S#handle_exception_saved+0x64/0xb8
>   (XEN)    [<ffff82d0806394fe>] F __start_xen+0x2cd/0x2980
>   (XEN)    [<ffff82d0802000ec>] F __high_start+0x4c/0x4e
>
> Furthermore, the PERFC_INCR() macro is wildly inefficient. There has been a
> single caller for many releases now, so inline it and delete the macro
> completely.
>
> For the assembly, move entry_vector from %eax into %ecx. There is no encoding
> benefit for movzbl, and it frees up %eax to be used inside the
> CONFIG_PERF_COUNTERS block where there is an encoding benefit.

I don't mind the change in register use, but I'd certainly like to
understand the supposed "encoding benefit". Afaics ...

> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -677,10 +677,14 @@ handle_exception_saved:
>  #endif /* CONFIG_PV */
>          sti
>  1:      movq  %rsp,%rdi
> -        movzbl UREGS_entry_vector(%rsp),%eax
> +        movzbl UREGS_entry_vector(%rsp), %ecx
>          leaq  exception_table(%rip),%rdx
> -        PERFC_INCR(exceptions, %rax, %rbx)
> -        mov   (%rdx, %rax, 8), %rdx
> +#ifdef CONFIG_PERF_COUNTERS
> +        lea   per_cpu__perfcounters(%rip), %rax
> +        add   STACK_CPUINFO_FIELD(per_cpu_offset)(%r14), %rax
> +        incl  ASM_PERFC_exceptions * 4(%rax, %rcx, 4)
> +#endif

... all insns inside the block use a ModR/M byte, and there's also no
special SIB encoding form used (i.e. one where the disp size would
increase because of register choice). With this clarified, i.e. possibly
the commit message updated accordingly, and possibly even code churn
reduced by undoing the change of register used,
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

Jan