
Re: [Xen-devel] [PATCH v2 1/4] x86: move syscall trampolines off the stack

On 22/05/2015 09:25, Jan Beulich wrote:
>>>> On 21.05.15 at 13:48, <JBeulich@xxxxxxxx> wrote:
>>>>> On 21.05.15 at 13:08, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 21/05/15 11:15, Jan Beulich wrote:
>>>> This is needed as stacks are going to become non-executable. Use
>>>> separate stub pages (shared among suitable CPUs on the same node)
>>>> instead.
>>>> Stub areas (currently 128 bytes each) are being split into two parts -
>>>> a fixed usage one (the syscall ones) and dynamically usable space,
>>>> which will be used by subsequent changes to hold dynamically generated
>>>> code during instruction emulation.
>>>> While sharing physical pages among certain CPUs on the same node, for
>>>> now the virtual mappings get established in distinct pages for each
>>>> CPU. This isn't a strict requirement, but simplifies VA space
>>>> management for this initial implementation: Sharing VA space would
>>>> require additional tracking of which areas are currently in use. If
>>>> the VA and/or TLB overhead turned out to be a problem, such extra code
>>>> could easily be added.
>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>> Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>> It might be wise to have a BUILD_BUG_ON() which confirms that
>>> STUBS_PER_PAGE is a power of two, which is a requirement given the way
>>> it is used.
>> Good idea.
> Sadly this can't be a BUILD_BUG_ON(), as STUBS_PER_PAGE isn't a
> compile time constant (due to the use of max()). I made it an
> ASSERT() for the time being.

In some copious free time, I shall see about borrowing some of the Linux
constants infrastructure.

We have quite a few examples of calculations which the compiler ought to
be able to evaluate at compile time, but can't given our current setup.


Xen-devel mailing list


