Re: [PATCH] x86/pv: Fix multiple bugs with SEGBASE_GS_USER_SEL



On 31.08.2020 13:18, Andrew Cooper wrote:
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -1059,16 +1059,52 @@ long do_set_segment_base(unsigned int which, unsigned long base)
>          break;
>  
>      case SEGBASE_GS_USER_SEL:
> -        __asm__ __volatile__ (
> -            "     swapgs              \n"
> -            "1:   movl %k0,%%gs       \n"
> -            "    "safe_swapgs"        \n"
> -            ".section .fixup,\"ax\"   \n"
> -            "2:   xorl %k0,%k0        \n"
> -            "     jmp  1b             \n"
> -            ".previous                \n"
> -            _ASM_EXTABLE(1b, 2b)
> -            : "+r" (base) );
> +        /*
> +         * We wish to update the user %gs from the GDT/LDT.  Currently, the
> +         * guest kernel's GS_BASE is in context.
> +         */
> +        asm volatile ( "swapgs" );
> +
> +        if ( base <= 3 )

Either use !(base & 0xfffc), or truncate the input to uint16_t
first - the upper 48 bits have always been ignored. With this
addressed one way or another:
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
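
To illustrate (untested sketch), either mask off just the RPL bits,
which already ignores everything above bit 15:

    if ( !(base & 0xfffc) )

or truncate up front and keep the existing comparison:

    unsigned int sel = (uint16_t)base;

    if ( sel <= 3 )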

Yet two more minor comments:

> +        {
> +            /* Work around NUL segment behaviour on AMD hardware. */
> +            if ( boot_cpu_data.x86_vendor &
> +                 (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
> +                asm volatile ( "mov %[sel], %%gs"
> +                               :: [sel] "rm" (FLAT_USER_DS32) );
> +        }
> +        else
> +            /* Fix up RPL. */
> +            base |= 3;

For a fair part of this block you could save a level of indentation
by inverting the initial condition and using "else if" later on.
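
I.e. something along these lines (untested, and leaving the selector
check itself aside per the earlier remark):

    if ( base > 3 )
        /* Fix up RPL. */
        base |= 3;
    else if ( boot_cpu_data.x86_vendor &
              (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
        /* Work around NUL segment behaviour on AMD hardware. */
        asm volatile ( "mov %[sel], %%gs"
                       :: [sel] "rm" (FLAT_USER_DS32) );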

> +        /*
> +         * Load the chosen selector, with fault handling.
> +         *
> +         * Errors ought to fail the hypercall, but that was never built in
> +         * originally, and Linux will BUG() if this call fails.
> +         *
> +         * NUL the selector in the case of an error.  This too needs to deal
> +         * with the AMD NUL segment behaviour, but it is already a slowpath in
> +         * #GP context so perform the flat load unconditionally to avoid
> +         * complicated logic.
> +         *
> +         * Anyone wanting to check for errors from this hypercall should
> +         * re-read %gs and compare against the input 'base' selector.
> +         */
> +        asm volatile ( "1: mov %k[sel], %%gs\n\t"
> +                       ".section .fixup, \"ax\", @progbits\n\t"
> +                       "2: mov %k[flat], %%gs\n\t"
> +                       "   xor %k[sel], %k[sel]\n\t"
> +                       "   jmp 1b\n\t"
> +                       ".previous\n\t"
> +                       _ASM_EXTABLE(1b, 2b)
> +                       : [sel] "+r" (base)
> +                       : [flat] "rm" (FLAT_USER_DS32) );

"m" is pointless to specify here - the compiler won't instantiate a
memory variable when the value is an immediate. This can be observed
best when you specify _just_ "m" here.
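
I.e. plain "r" would do (sketch):

                       : [flat] "r" (FLAT_USER_DS32) );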

Jan
