
Re: [Xen-devel] [kernel-hardening] Re: x86: PIE support and option to extend KASLR randomization



On Thu, Aug 17, 2017 at 4:09 AM, Ingo Molnar <mingo@xxxxxxxxxx> wrote:
>
> * Thomas Garnier <thgarnie@xxxxxxxxxx> wrote:
>
>> > > -mcmodel=small/medium assume symbols are in the low 32 bits of the
>> > > address space. They generate instructions where the high 32 bits of
>> > > virtual addresses are zero.
>> >
>> > How are these assumptions hardcoded by GCC? Most of the instructions
>> > should be relocatable straight away, as most call/jump/branch
>> > instructions are RIP-relative.
>>
>> I think PIE is capable of using relative instructions well.
>> mcmodel=large assumes symbols can be anywhere.
>
> So if the numbers in your changelog and Kconfig text cannot be trusted,
> there's this description of the size impact, which I suspect is less
> susceptible to measurement error:
>
> +         The kernel and modules will generate slightly more assembly (1 to 2%
> +         increase on the .text sections). The vmlinux binary will be
> +         significantly smaller due to less relocations.
>
> ... but describing a 1-2% kernel text size increase as "slightly more
> assembly" shows a gratuitous disregard for kernel code generation quality!
> In reality that's a huge size increase that in most cases will almost
> directly translate into a 1-2% slowdown for kernel-intensive workloads.
>
> Where does that size increase come from, if PIE is capable of using relative
> instructions well? Does it come from the loss of a general-purpose register
> and the resulting increase in register pressure, stack spills, etc.?
>
> So I'm still unhappy about this all, and about the attitude surrounding it.
>
> Thanks,
>
>         Ingo
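
[ For concreteness, a minimal sketch of the addressing difference the
  quoted exchange is about. The symbol and function names below are made
  up for illustration, and the exact instructions depend on the GCC
  version and flags used: ]

/*
 * Illustrative sketch: how taking the address of a kernel symbol is
 * typically emitted under the code models discussed above. "foo" and
 * "addr_of_foo" are made-up names for this example.
 */
extern int foo;

int *addr_of_foo(void)
{
        /*
         * -mcmodel=small:
         *      movl $foo, %eax        32-bit absolute immediate; only
         *                             valid if foo sits in the low 32 bits
         *
         * -mcmodel=kernel (what the kernel builds with today):
         *      movq $foo, %rax        sign-extended 32-bit immediate;
         *                             assumes foo sits in the top 2GB
         *
         * -mcmodel=large:
         *      movabsq $foo, %rax     full 64-bit immediate; no placement
         *                             assumption, but larger code, and
         *                             calls go indirectly through registers
         *
         * -fPIE (this series):
         *      leaq foo(%rip), %rax   RIP-relative, for symbols known to
         *                             bind locally; otherwise the access
         *                             goes through the GOT
         *                             (movq foo@GOTPCREL(%rip), %rax),
         *                             which is one place extra .text and
         *                             extra loads can come from
         */
        return &foo;
}

[ Compiling a small file like this with gcc -S -O2 and each of the flags
  above (adding -fno-pie where the distro compiler defaults to PIE) shows
  the per-access difference directly. ]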

Is the expectation then that security features should also decrease size
and operational latency? That seems a bit unrealistic.
A 1-2% performance hit on systems that have become at least several
hundred percent faster in recent years is not a significant regression
compared to the earlier baseline.
While nobody is saying that performance and size concerns are
irrelevant, the habit of citing single-digit performance losses as a
reason to leave security features out has made Linux the equivalent of
the broad side of a barn in security terms.
If a change has a real-world benefit to the security posture of the
system, it should be a configurable option so users can decide whether a
percentage point or two of runtime is worth the mitigation or
improvement it provides.

Separately, reading this thread I've noticed that people are using
different compiler versions in their measurements, which makes any of
these sub-10% deltas moot: building a kernel with GCC 4.9 versus 7.1 has
a larger impact, and not having the same compiler flags available
everywhere (like the no-PLT flag) flat out creates confusion. Shouldn't
all of these tests be run on a standardized build config and toolchain?

Thanks,
-Boris
