
Re: [PATCH 3/3] x86/pv: Compile out compat_gdt in !CONFIG_PV builds



On 20.04.2020 19:08, Andrew Cooper wrote:
> On 20/04/2020 16:47, Jan Beulich wrote:
>> On 20.04.2020 16:39, Andrew Cooper wrote:
>>> On 20/04/2020 15:12, Jan Beulich wrote:
>>>> On 17.04.2020 17:50, Andrew Cooper wrote:
>>>>> There is no need for the Compat GDT if there are no 32bit PV guests.  This
>>>>> saves 4k per online CPU.
>>>>>
>>>>> Bloat-o-meter reports the following savings in Xen itself:
>>>>>
>>>>>   add/remove: 0/3 grow/shrink: 1/4 up/down: 7/-4612 (-4605)
>>>>>   Function                                     old     new   delta
>>>>>   cpu_smpboot_free                            1249    1256      +7
>>>>>   per_cpu__compat_gdt_l1e                        8       -      -8
>>>>>   per_cpu__compat_gdt                            8       -      -8
>>>>>   init_idt_traps                               442     420     -22
>>>>>   load_system_tables                           414     364     -50
>>>>>   trap_init                                    444     280    -164
>>>>>   cpu_smpboot_callback                        1255     991    -264
>>>>>   boot_compat_gdt                             4096       -   -4096
>>>>>   Total: Before=3062726, After=3058121, chg -0.15%
>>>>>
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>>> ---
>>>>> CC: Jan Beulich <JBeulich@xxxxxxxx>
>>>>> CC: Wei Liu <wl@xxxxxxx>
>>>>> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>>>>>
>>>>> The increase in cpu_smpboot_free() appears to be a consequence of a
>>>>> totally different layout of basic blocks.
>>>>> ---
>>>>>  xen/arch/x86/cpu/common.c |  5 +++--
>>>>>  xen/arch/x86/desc.c       |  2 ++
>>>>>  xen/arch/x86/smpboot.c    |  5 ++++-
>>>>>  xen/arch/x86/traps.c      | 10 +++++++---
>>>>>  4 files changed, 16 insertions(+), 6 deletions(-)
>>>>>
>>>>> diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
>>>>> index 1b33f1ed71..7b093cb421 100644
>>>>> --- a/xen/arch/x86/cpu/common.c
>>>>> +++ b/xen/arch/x86/cpu/common.c
>>>>> @@ -752,8 +752,9 @@ void load_system_tables(void)
>>>>>  
>>>>>   _set_tssldt_desc(gdt + TSS_ENTRY, (unsigned long)tss,
>>>>>                    sizeof(*tss) - 1, SYS_DESC_tss_avail);
>>>>> - _set_tssldt_desc(compat_gdt + TSS_ENTRY, (unsigned long)tss,
>>>>> -                  sizeof(*tss) - 1, SYS_DESC_tss_busy);
>>>>> + if ( IS_ENABLED(CONFIG_PV32) )
>>>>> +         _set_tssldt_desc(compat_gdt + TSS_ENTRY, (unsigned long)tss,
>>>>> +                          sizeof(*tss) - 1, SYS_DESC_tss_busy);
>>>> Wouldn't this better be "if ( opt_pv32 )"? Also elsewhere then.
>>> Doing it like this specifically ensures that there is never a case where
>>> things are half configured.
>> But this way you set up something in the GDT that's never going
>> to be used when "pv=no-32". Why leave a TSS accessible that we
>> don't need?
> 
> Defence in depth.
> 
> Having it only partially set up is more likely to fail in a
> security-relevant way than having it fully set up.

Well, I'm not convinced, but anyway
Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
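For readers following along, the two gating styles being debated look roughly
like this (a sketch based on the hunk above; the opt_pv32 variant is only the
suggestion, not what the patch applies, and the exact option plumbing in the
tree may differ):

    /*
     * Compile-time gate, as in the patch: the whole branch (and the
     * reference to compat_gdt) disappears in !CONFIG_PV32 builds, but
     * PV32-capable builds always write the compat TSS descriptor, even
     * when booted with "pv=no-32".
     */
    if ( IS_ENABLED(CONFIG_PV32) )
        _set_tssldt_desc(compat_gdt + TSS_ENTRY, (unsigned long)tss,
                         sizeof(*tss) - 1, SYS_DESC_tss_busy);

    /*
     * Runtime gate, as suggested: additionally skips the setup when 32bit
     * PV support is disabled on the command line, leaving the compat GDT's
     * TSS slot unused in that configuration.
     */
    if ( opt_pv32 )
        _set_tssldt_desc(compat_gdt + TSS_ENTRY, (unsigned long)tss,
                         sizeof(*tss) - 1, SYS_DESC_tss_busy);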

> This particular example is poor.  There is no need to have the TSS in
> either GDT after the `ltr` instruction at boot for 64bit, because we
> don't task switch, but ISTR you requesting that this stay as-was for
> consistency.  (For 32bit Xen, it was strictly necessary for the #DF task
> switch to work.)

Well, I'm simply of the opinion that what the TR holds should point
to something valid in the currently active GDT. (As an alternative
we could decide to zap TSS descriptors from _both_ GDTs once we're
past the LTR. This wouldn't even conflict with resume from S3, as
we re-write the TSS descriptors immediately ahead of the LTR.)
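A minimal sketch of that alternative, assuming it sat directly after the ltr
in load_system_tables() (the two-slot memset covers the 16-byte long mode TSS
descriptor; symbol names are illustrative):

    /*
     * TR is loaded and nothing task switches on 64bit, so the TSS
     * descriptors could in principle be zapped from both GDTs.  Resume
     * from S3 re-runs this path and rewrites them ahead of the next ltr,
     * so this wouldn't conflict with suspend/resume.
     */
    memset(gdt + TSS_ENTRY, 0, sizeof(*gdt) * 2);
    if ( IS_ENABLED(CONFIG_PV32) )
        memset(compat_gdt + TSS_ENTRY, 0, sizeof(*compat_gdt) * 2);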

> However, the other logic, particularly the cached l1e pointing at the
> percpu compat_gdt is more liable to go wrong in interesting ways.

Possibly, yes.
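For context, the per-CPU setup being referred to has roughly this shape (a
hypothetical sketch reusing the variable names from the bloat-o-meter output
above; allocation details and error handling are illustrative):

    /*
     * cpu_smpboot_callback(), PV32-capable build: allocate the per-CPU
     * compat GDT and cache an l1e pointing at it for the context switch
     * path.  Doing only one half of this would leave a cached l1e mapping
     * a page that never became a valid GDT, i.e. the half-configured state
     * being discussed above.
     */
    if ( IS_ENABLED(CONFIG_PV32) )
    {
        per_cpu(compat_gdt, cpu) = gdt = alloc_xenheap_pages(0, memflags);
        if ( !gdt )
            goto out;
        per_cpu(compat_gdt_l1e, cpu) =
            l1e_from_pfn(virt_to_mfn(gdt), __PAGE_HYPERVISOR_RW);
        memcpy(gdt, boot_compat_gdt, PAGE_SIZE);
        gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu;
    }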

Jan



 

