
[Xen-devel] Ping: [PATCH RFC] x86/HVM: also stuff RSB upon exit to guest



>>> On 27.07.18 at 16:20, <JBeulich@xxxxxxxx> wrote:
> In order to mostly eliminate the ability of guest-level attackers to
> abuse what Xen leaves in the RSB, fill the RSB with almost-NULL
> pointers right before entering guest context.
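
[Purely for illustration, not part of the patch: a minimal C model of the
indirection being introduced, using made-up *_model names. In the patch the
call is issued from assembly via SPEC_CTRL_EXIT_TO_HVM further down.]

    /* do_overwrite_rsb_model stands in for the assembly stuffing stub in
     * Xen's .text; the function pointer is later redirected to a copy of
     * that stub placed just above NULL, so that every return address the
     * stub's internal calls push onto the RSB is almost-NULL. */
    static void do_overwrite_rsb_model(void)
    {
        /* stands in for DO_OVERWRITE_RSB + ret */
    }

    static void (*hvm_overwrite_rsb_model)(void) = do_overwrite_rsb_model;

    static void exit_to_hvm_model(void)
    {
        /* Calling through the pointer is safe even before the near-NULL
         * copy exists, as it then still targets the .text instance. */
        hvm_overwrite_rsb_model();
    }
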
> 
> The placement of the initialization code is intentional: If it were put
> in e.g. hvm_enable(), we'd have to be more careful wrt. changing the
> low L4 entry of the idle page tables (I didn't check whether boot-time
> low mappings have disappeared by then), and get_random() couldn't be
> used either.  Furthermore, this way no setup occurs at all if no HVM
> guest is ever started.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> TBD: In the end I'm not sure the (pseudo-)randomness is worth it.
>      Placing the stub uniformly at a fixed address would allow getting
>      rid of the variable, slightly streamlining the call sites.
> TBD: Obviously, using the page at NULL like this has the downside that
>      reads through NULL will no longer fault.
> 
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -85,6 +85,10 @@ integer_param("hvm_debug", opt_hvm_debug
>  
>  struct hvm_function_table hvm_funcs __read_mostly;
>  
> +extern void do_overwrite_rsb(void);
> +extern const char do_overwrite_rsb_end[];
> +void (* __read_mostly hvm_overwrite_rsb)(void) = do_overwrite_rsb;
> +
>  /*
>   * The I/O permission bitmap is globally shared by all HVM guests except
>   * the hardware domain which needs a more permissive one.
> @@ -583,6 +587,49 @@ int hvm_domain_initialise(struct domain
>          return -EINVAL;
>      }
>  
> +    if ( boot_cpu_has(X86_FEATURE_SC_RSB_HVM) &&
> +         unlikely((unsigned long)hvm_overwrite_rsb >= PAGE_SIZE) )
> +    {
> +        /*
> +         * Map an RSB stuffing routine at a random, 16-byte aligned address
> +         * in the first linear page, to allow filling the RSB with almost-NULL
> +         * pointers before entering HVM guest context.  This builds on the
> +         * assumption that no sane OS will place anything there which could be
> +         * abused as an exploit gadget.
> +         */
> +        unsigned long addr = (get_random() << 4) & ~PAGE_MASK;
> +        unsigned int size = do_overwrite_rsb_end -
> +                            (const char *)do_overwrite_rsb;
> +        struct page_info *pg = alloc_domheap_page(NULL, 0);
> +        void *ptr;
> +
> +        if ( !pg ||
> +             map_pages_to_xen(0, page_to_mfn(pg), 1, PAGE_HYPERVISOR_RX) )
> +        {
> +            if ( pg )
> +                free_domheap_page(pg);
> +            return -ENOMEM;
> +        }
> +
> +        /*
> +         * Avoid NULL itself, so that branches there will hit the all-ones
> +         * pattern installed below.
> +         */
> +        if ( !addr )
> +            addr = 0x10;
> +        while ( addr + size > PAGE_SIZE )
> +            addr -= 0x10;
> +
> +        ptr = __map_domain_page(pg);
> +        memset(ptr, -1, PAGE_SIZE);
> +        memcpy(ptr + addr, do_overwrite_rsb, size);
> +        unmap_domain_page(ptr);
> +
> +        smp_wmb();
> +        hvm_overwrite_rsb = (void *)addr;
> +        printk(XENLOG_INFO "RSB stuffing stub at %p\n", hvm_overwrite_rsb);
> +    }
> +
>      spin_lock_init(&d->arch.hvm_domain.irq_lock);
>      spin_lock_init(&d->arch.hvm_domain.uc_lock);
>      spin_lock_init(&d->arch.hvm_domain.write_map.lock);
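
[As an aside, the offset selection in the hunk above can be rendered as a
standalone sketch. The helper name is hypothetical, and this assumes the
usual x86 PAGE_SIZE of 4k with ~PAGE_MASK == PAGE_SIZE - 1, matching what
the open-coded expression in hvm_domain_initialise() relies on.]

    #include <stdio.h>

    #define PAGE_SIZE 4096UL   /* assumption: 4k pages, as on x86 */

    /* Hypothetical standalone rendering of the offset selection: pick a
     * 16-byte aligned offset inside the first page, never 0 (so a branch
     * to NULL still hits the all-ones fill pattern), and pull it back
     * until the whole stub fits within the page. */
    static unsigned long pick_stub_offset(unsigned long rnd, unsigned int size)
    {
        unsigned long addr = (rnd << 4) & (PAGE_SIZE - 1);

        if ( !addr )
            addr = 0x10;
        while ( addr + size > PAGE_SIZE )
            addr -= 0x10;

        return addr;
    }

    int main(void)
    {
        /* e.g. a 64-byte stub and an arbitrary "random" input */
        printf("%#lx\n", pick_stub_offset(0x1234567, 64));
        return 0;
    }
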
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1661,6 +1661,10 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
>                 (ROOT_PAGETABLE_FIRST_XEN_SLOT + slots -
>                  l4_table_offset(XEN_VIRT_START)) * sizeof(*l4t));
>      }
> +
> +    /* Make sure the RSB stuffing stub is accessible. */
> +    if ( is_hvm_domain(d) )
> +        l4t[0] = idle_pg_table[0];
>  }
>  
>  bool fill_ro_mpt(mfn_t mfn)
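
[The effect of copying the L4 slot can be illustrated with a toy C example,
nothing Xen-specific and all names made up: both top-level tables reference
the same lower-level table, so a mapping inserted later through the idle
page tables becomes visible through every table that copied the slot.]

    #include <stdio.h>

    int main(void)
    {
        static int l3_low[512];                /* shared lower-level table */
        int *idle_l4[512]  = { l3_low };       /* idle_pg_table            */
        int *guest_l4[512] = { idle_l4[0] };   /* l4t[0] = idle_pg_table[0] */

        idle_l4[0][7] = 0x123;                 /* later update via idle tables */
        printf("%#x\n", guest_l4[0][7]);       /* visible via the copied slot */
        return 0;
    }
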
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -552,6 +552,13 @@ ENTRY(dom_crash_sync_extable)
>          jmp   asm_domain_crash_synchronous /* Does not return */
>          .popsection
>  
> +#ifdef CONFIG_HVM
> +ENTRY(do_overwrite_rsb)
> +        DO_OVERWRITE_RSB tmp=rdx
> +        ret
> +GLOBAL(do_overwrite_rsb_end)
> +#endif
> +
>          .section .text.entry, "ax", @progbits
>  
>  ENTRY(common_interrupt)
> --- a/xen/include/asm-x86/spec_ctrl_asm.h
> +++ b/xen/include/asm-x86/spec_ctrl_asm.h
> @@ -249,6 +249,8 @@
>  
>  /* Use when exiting to HVM guest context. */
>  #define SPEC_CTRL_EXIT_TO_HVM                                           \
> +    mov hvm_overwrite_rsb(%rip), %rcx;                                  \
> +    ALTERNATIVE "", "INDIRECT_CALL %rcx", X86_FEATURE_SC_RSB_HVM;       \
>      ALTERNATIVE "",                                                     \
>          DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM
>  




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

