
Re: [Xen-devel] [PATCH 1/4] x86: move syscall trampolines off the stack

>>> On 18.05.15 at 20:39, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 18/05/15 13:46, Jan Beulich wrote:
>> This is needed as stacks are going to become non-executable. Use
>> separate stub pages (shared among suitable CPUs on the same node)
>> instead.
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> Can you please include a description of how you intend the stubs to
> function, and how they are layed out?  Parts of the code look like a
> single page per stub, while other bits look like several stubs per page.

I'm adding this to the already present description:

Stub areas (currently 128 bytes each) are split into two parts:
a fixed-use one (the syscall stubs) and dynamically usable space,
which subsequent changes will use to hold dynamically generated
code during instruction emulation.

While physical pages are shared among certain CPUs on the same node,
for now the virtual mappings get established in distinct pages for
each CPU. This isn't a strict requirement, but simplifies VA space
management for this initial implementation: Sharing VA space would
require additional tracking of which areas are currently in use. If
the VA and/or TLB overhead turned out to be a problem, such extra
code could easily be added.

> (Personally, I would split the stub allocation/mapping/freeing into a
> patch separately to moving the syscall trampolines, as each are
> moderately complicated changes.)

I'm afraid this wouldn't work: The freeing of the stub page depends
on checking the first byte of each stub area - the page must not get
freed as long as any area's first byte is other than 0xCC. Yet only
the setting up of the syscall stubs guarantees this (and I'm not
really looking forward to adding - however little - code to store a
placeholder instead).

