[PATCH] x86/livepatch: enable livepatching assembly source files
In order to be able to livepatch code from assembly files we need:

 * Proper function symbols from assembly code, including the size.
 * Separate sections for each function.

However assembly code doesn't really have the concept of a function,
and hence the code tends to chain different labels that can also be
entry points.  In order to be able to livepatch such code we need to
enclose the assembly code in isolated function-like blocks, so they
can be handled by livepatch.

Introduce two new macros to do so, {START,END}_LP(), that take a
unique function-like name, create the function symbol and put the
code into a separate text section.

Note that START_LP() requires a preceding jump before the section
change, so that any preceding code that falls through correctly
continues execution, as sections can be reordered.  Chaining of
consecutive livepatchable blocks will also require that the previous
section jumps into the next one if required.

A couple of shortcomings:

 * We don't check that the size of the section is enough to fit a
   jump instruction (ARCH_PATCH_INSN_SIZE).  Some logic from the
   alternatives framework should be used to pad sections if required.

 * Any labels inside of a {START,END}_LP() section must not be
   referenced from another section, as the patching would break
   those.  I haven't figured out a way to detect such references.
   We already use .L to denote local labels, but we would have to be
   careful.

Some of the assembly entry points cannot be safely patched until
it's safe to use jmp, as livepatch can replace a whole block with a
jmp to a new address, and that won't be safe until speculative
mitigations have been applied.  I could also look into allowing
livepatch of sections where jmp replacement is not safe by requesting
in-place code replacement only; we could then maybe allow adding some
nop padding to those sections in order to cope with the size
increasing in further livepatches.
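To illustrate the chaining requirement (a sketch only; func_a and
func_b are made-up names, not part of the patch): because END_LP()
pops back to the previous section, code that can reach the end of its
own block must jump explicitly into the next block rather than fall
through, since the per-block .text.<name> sections may be reordered
by the linker:

```asm
/* Hypothetical example using the proposed macros. */
START_LP(func_a)
        /* ... body of func_a ... */

        /*
         * Explicit jump: .text.func_a may be placed anywhere, so it
         * cannot rely on falling through into func_b's section.
         */
        jmp func_b
END_LP(func_a)

/* START_LP() itself emits 'jmp func_b' in the enclosing section, so
 * code preceding this point in the original section still reaches
 * func_b correctly. */
START_LP(func_b)
        /* ... body of func_b ... */
        ret
END_LP(func_b)
```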
So far this patch only contains two switched functions:
restore_all_xen and common_interrupt.  I don't really want to switch
more code until we agree on the approach, so take this as a kind of
RFC patch.  Obviously the conversion doesn't need to be done in one
go, nor does all assembly code need to be 'transformed' in this way.

Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
 xen/arch/x86/include/asm/config.h | 14 ++++++++++++++
 xen/arch/x86/x86_64/entry.S       |  5 ++++-
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/include/asm/config.h b/xen/arch/x86/include/asm/config.h
index fbc4bb3416bd..68e7fdfe3517 100644
--- a/xen/arch/x86/include/asm/config.h
+++ b/xen/arch/x86/include/asm/config.h
@@ -44,6 +44,20 @@
 /* Linkage for x86 */
 #ifdef __ASSEMBLY__
 #define ALIGN .align 16,0x90
+#ifdef CONFIG_LIVEPATCH
+#define START_LP(name)                               \
+        jmp name;                                    \
+        .pushsection .text.name, "ax", @progbits;    \
+        name:
+#define END_LP(name)                                 \
+        .size name, . - name;                        \
+        .type name, @function;                       \
+        .popsection
+#else
+#define START_LP(name)                               \
+        name:
+#define END_LP(name)
+#endif
 #define ENTRY(name)                                  \
   .globl name;                                       \
   ALIGN;                                             \
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 7675a59ff057..c204634910c4 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -660,7 +660,7 @@ ENTRY(early_page_fault)
         ALIGN
 /* No special register assumptions. */
-restore_all_xen:
+START_LP(restore_all_xen)
         /*
          * Check whether we need to switch to the per-CPU page tables, in
          * case we return to late PV exit code (from an NMI or #MC).
@@ -677,6 +677,7 @@ UNLIKELY_END(exit_cr3)
 
         RESTORE_ALL adj=8
         iretq
+END_LP(restore_all_xen)
 
 ENTRY(common_interrupt)
         ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
@@ -687,6 +688,7 @@ ENTRY(common_interrupt)
         SPEC_CTRL_ENTRY_FROM_INTR /* Req: %rsp=regs, %r14=end, %rdx=0, Clob: acd */
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
+START_LP(common_interrupt_lp)
         mov   STACK_CPUINFO_FIELD(xen_cr3)(%r14), %rcx
         mov   STACK_CPUINFO_FIELD(use_pv_cr3)(%r14), %bl
         mov   %rcx, %r15
@@ -707,6 +709,7 @@
         mov   %r15, STACK_CPUINFO_FIELD(xen_cr3)(%r14)
         mov   %bl, STACK_CPUINFO_FIELD(use_pv_cr3)(%r14)
         jmp   ret_from_intr
+END_LP(common_interrupt_lp)
 
 ENTRY(page_fault)
         ENDBR64
-- 
2.40.0