
Re: [Xen-devel] [RFC PATCH v3.1 1/2] xsplice: rfc.v3.1

Thanks for the work on this. I had some comments and questions on the
latest draft.

On Mon, Jul 27, 2015, Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx> wrote:
> +#define XSPLICE_HOWTO_FLAG_PC_REL    0x1 /* Is PC relative. */  
> +#define XSPLICE_HOWTO_FLAG_SIGN      0x2 /* Should the new value be treated 
> as a signed value. */  


> +struct xsplice_reloc_howto {  
> +    uint32_t    howto; /* XSPLICE_HOWTO_* */  
> +    uint32_t    flag; /* XSPLICE_HOWTO_FLAG_* */  
> +    uint32_t    size; /* Size, in bytes, of the item to be relocated. */  
> +    uint32_t    r_shift; /* The value the final relocation is shifted right 
> by; used to drop unwanted data from the relocation. */  
> +    uint64_t    mask; /* Bitmask for which parts of the instruction or data 
> are replaced with the relocated value. */  
> +    uint8_t     pad[8]; /* Must be zero. */  
> +};  

I'm curious how r_shift and mask are used. I'm familiar with x86 and
x86_64, and I'm not sure how these fit in there. Is this to support other
architectures?
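To make my question concrete, here is how I'd guess those two fields are
meant to be applied (function and variable names are mine, not from the
patch; this pattern mostly makes sense for RISC targets like ARM, where
an immediate is split across bitfields of a fixed-width instruction):

```c
#include <stdint.h>

/* Sketch: drop unwanted low bits of the relocated value with r_shift,
 * then merge only the bits selected by mask into the instruction word,
 * leaving the opcode bits outside the mask untouched. */
static uint32_t apply_howto32(uint32_t insn, uint64_t value,
                              uint32_t r_shift, uint64_t mask)
{
    uint32_t bits = (uint32_t)((value >> r_shift) & mask);

    return (insn & ~(uint32_t)mask) | bits;
}
```

On x86_64, where immediates are byte-aligned and contiguous, I'd expect
r_shift to be 0 and mask to cover the whole field, which is why I wonder
whether the generality is for other architectures.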

> +### Trampoline (e9 opcode)
> +
> +The e9 opcode used for jmpq uses a 32-bit signed displacement. That means
> +we are limited to up to 2GB of virtual address to place the new code
> +from the old code. That should not be a problem since Xen hypervisor has
> +a very small footprint.
> +
> +However if we need - we can always add two trampolines. One at the 2GB
> +limit that calls the next trampoline.

I'm not sure I understand the concern. At least on x86_64, there is a
ton of unused virtual address space where the hypervisor image is
mapped. Why not simply map memory at the end of virtual address space?

There shouldn't be a concern with 2GB jumps then.
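For reference, the constraint being discussed is just the reach of the
5-byte jmp; something along these lines (my own sketch, not the patch's
code; assumes a little-endian x86 host, where memcpy of the int32_t
produces the instruction encoding):

```c
#include <stdint.h>
#include <string.h>

/* Sketch: emit the 5-byte "e9" jmp with a 32-bit signed displacement.
 * The displacement is relative to the end of the instruction, hence
 * the +5.  Returns 0 on success, -1 if the target is outside the
 * +-2GB range the opcode can express. */
static int emit_jmp_rel32(uint8_t buf[5], uint64_t from, uint64_t to)
{
    int64_t disp = (int64_t)to - (int64_t)(from + 5);

    if (disp > INT32_MAX || disp < INT32_MIN)
        return -1;

    buf[0] = 0xe9;
    memcpy(&buf[1], &(int32_t){ (int32_t)disp }, sizeof(int32_t));
    return 0;
}
```

As long as the replacement code is mapped within 2GB of the patch site,
the -1 case never triggers and no chained trampoline is needed.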

> +Please note there is a small limitation for trampolines in
> +function entries: The target function (+ trailing padding) must be able
> +to accommodate the trampoline. On x86 with +-2 GB relative jumps,
> +this means 5 bytes are required.
> +
> +Depending on compiler settings, there are several functions in Xen that
> +are smaller (without inter-function padding).
> +
> +<pre> 
> +readelf -sW xen-syms | grep " FUNC " | \
> +    awk '{ if ($3 < 5) print $3, $4, $5, $8 }'
> +
> +...
> +3 FUNC LOCAL wbinvd_ipi
> +3 FUNC LOCAL shadow_l1_index
> +...
> +</pre>
> +A compile-time check (e.g., enforcing a minimum function alignment) or
> +a runtime check that verifies the symbol size (+ padding to the next
> +symbol) in the hypervisor is advised.

Is this really necessary? The way Xen is currently compiled results in
functions being aligned at 16-byte boundaries. The extra space is padded
with NOPs. Even if a function is only 3 bytes, it still has at least 16
bytes of space to use.

This can be controlled with the -falign-functions option to gcc.

Also, there are ways to get a 5-byte NOP added before the function.
This is what the Linux kernel does for ftrace which is what the recent
Linux kernel live patching support is built on.

It seems like it would be easier to be explicit during the build process
than do runtime checks to ensure there is enough space.
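If a runtime check is kept anyway as a safety net, I'd imagine it as
nothing more than this (names are mine, not from the patch):

```c
#include <stddef.h>

#define TRAMPOLINE_SIZE 5  /* e9 opcode + rel32 on x86 */

/* Sketch: refuse to patch a function whose body plus trailing padding
 * cannot hold the jump.  With -falign-functions=16 the NOP padding
 * alone normally makes this pass, even for a 3-byte function, so the
 * check is cheap insurance rather than a hard requirement. */
static int can_patch(size_t sym_size, size_t pad_to_next_sym)
{
    return sym_size + pad_to_next_sym >= TRAMPOLINE_SIZE;
}
```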

> +### When to patch
> +
> +During the discussion on the design two candidates bubbled up where
> +the call stack for each CPU would be deterministic. This would
> +minimize the chance of the patch not being applied due to safety
> +checks failing.

It would be nice to have the consistency model be more explicit.

Maybe using the terminology from this LKML post?


> +To rendezvous all the CPUs, a barrier with a maximum timeout (which
> +could be adjusted), combined with forcing all other CPUs through the
> +hypervisor with IPIs, can be utilized to have all the CPUs in lockstep.
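If it helps the discussion, here is roughly how I picture that
rendezvous (a sketch with made-up names, not Xen's API; a spin budget
stands in for the real, adjustable timeout):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch: each CPU, forced into the hypervisor by an IPI, decrements a
 * shared counter and spins until every CPU has arrived or the spin
 * budget (standing in for the timeout) is exhausted.  On timeout the
 * patch application is aborted and can be retried later. */
static bool rendezvous(atomic_int *pending, unsigned long max_spins)
{
    unsigned long spins = 0;

    atomic_fetch_sub(pending, 1);

    while (atomic_load(pending) > 0) {
        if (++spins > max_spins)
            return false;       /* timed out: abort the patch */
    }
    return true;                /* all CPUs in lockstep */
}
```

The interesting policy question is what happens on the false path:
whether the patch attempt is retried automatically or surfaced to the
toolstack.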


> +### Compiling the hypervisor code
> +
> +Hotpatch generation often requires support for compiling the target
> +with -ffunction-sections / -fdata-sections.  Changes would have to
> +be done to the linker scripts to support this.

Is this for correctness reasons?

I understand this would be a good idea to reduce the size of patches,
but I wanted to make sure I'm not missing something.
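For what it's worth, my understanding is that with -ffunction-sections
each function lands in its own .text.<name> section, so the linker
script's catch-all would need to change along these lines (a sketch,
not an actual xen.lds.S diff):

```
.text : {
        *(.text)
        *(.text.*)      /* collect the per-function sections */
} :text
```

That looks like a size/granularity improvement for patch generation
rather than a correctness requirement, hence my question.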

> +### Symbol names
> +
> +
> +Xen, as it is now, has a couple of non-unique symbol names which will
> +make runtime symbol identification hard.  Sometimes, static symbols
> +simply have the same name in C files, sometimes such symbols get
> +included via header files, and some C files are also compiled
> +multiple times and linked under different names (guest_walk.c).

I'm not sure I understand the problem with static symbols. They aren't
visible outside of the .c file, so when performing the linking against
the target xen image, there shouldn't be any conflicts.
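Runtime symbol identification is a fair concern, though. A quick way to
see the scale of the problem (canned nm-style sample lines here; on a
real build, replace the printf with `nm --defined-only xen-syms` -- the
xen-syms path is an assumption):

```shell
# Sketch: list symbol names that occur more than once in the image.
printf '%s\n' \
  'ffff82d080130000 t shadow_l1_index' \
  'ffff82d080140000 t shadow_l1_index' \
  'ffff82d080150000 T do_domctl' |
awk '{ print $3 }' | sort | uniq -d
```

Any name this prints would need disambiguation (e.g. by file name) when
a patch refers to it, but that is a patch-generation problem, not a
linking conflict.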


Xen-devel mailing list


