Re: [Xen-devel] [PATCH RFC] x86/HVM: also stuff RSB upon exit to guest
On 27/07/18 15:20, Jan Beulich wrote:
> In order to mostly eliminate abuse of what Xen leaves in the RSB by
> guest level attackers, fill the RSB with almost-NULL pointers right
> before entering guest context.

How do you envisage an attacker using what Xen leaves in the RSB?  An
attacker doesn't have much/any control of the callgraph Xen makes.

>
> The placement of the initialization code is intentional: If it was put
> in e.g. hvm_enable(), we'd have to be more careful wrt. changing the
> low L4 entry of the idle page tables (I didn't check whether boot time
> low mappings have disappeared by then), and get_random() couldn't be
> used either.  Furthermore this way, if no HVM guest gets ever started,
> no setup would ever occur.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> TBD: In the end I'm not sure the (pseudo-)randomness is worth it.
>      Placing the stub uniformly at a fixed address would allow to get
>      rid of the variable, slightly streamlining the call sites.
> TBD: Obviously using NULL here has the downside of reads through NULL
>      not going to fault anymore.

This alone is sufficient justification to not use this route.  Ideally,
we should have no mappings within disp32 of 0.

In principle, there are other better addresses which could be used, such
as the page immediately below the canonical boundary, or TSEG/HSEG,
but...

>
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -85,6 +85,10 @@ integer_param("hvm_debug", opt_hvm_debug
>
>  struct hvm_function_table hvm_funcs __read_mostly;
>
> +extern void do_overwrite_rsb(void);
> +extern const char do_overwrite_rsb_end[];
> +void (* __read_mostly hvm_overwrite_rsb)(void) = do_overwrite_rsb;
> +
>  /*
>   * The I/O permission bitmap is globally shared by all HVM guests except
>   * the hardware domain which needs a more permissive one.
> @@ -583,6 +587,49 @@ int hvm_domain_initialise(struct domain
>          return -EINVAL;
>      }
>
> +    if ( boot_cpu_has(X86_FEATURE_SC_RSB_HVM) &&
> +         unlikely((unsigned long)hvm_overwrite_rsb >= PAGE_SIZE) )
> +    {
> +        /*
> +         * Map an RSB stuffing routine at a random, 16-byte aligned address
> +         * in the first linear page, to allow filling the RSB with
> +         * almost-NULL pointers before entering HVM guest context.  This
> +         * builds on the assumption that no sane OS will place anything
> +         * there which could be abused as an exploit gadget.
> +         */
> +        unsigned long addr = (get_random() << 4) & ~PAGE_MASK;
> +        unsigned int size = do_overwrite_rsb_end -
> +                            (const char *)do_overwrite_rsb;
> +        struct page_info *pg = alloc_domheap_page(NULL, 0);
> +        void *ptr;
> +
> +        if ( !pg ||
> +             map_pages_to_xen(0, page_to_mfn(pg), 1, PAGE_HYPERVISOR_RX) )
> +        {
> +            if ( pg )
> +                free_domheap_page(pg);
> +            return -ENOMEM;
> +        }
> +
> +        /*
> +         * Avoid NULL itself, so that branches there will hit the all-ones
> +         * pattern installed below.
> +         */
> +        if ( !addr )
> +            addr = 0x10;
> +        while ( addr + size > PAGE_SIZE )
> +            addr -= 0x10;

    addr = max(0x10, min(addr, PAGE_SIZE - ROUNDUP(size, 0x10)));

although I'd agree that the randomisation doesn't help much here.
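As a standalone illustration (not part of the patch) that the
single-expression clamp picks the same slot as the loop for every
16-byte-aligned starting point; PAGE_SIZE, ROUNDUP, min and max below are
local stand-ins for Xen's helpers, and the stub size is an arbitrary
example value:

    /*
     * Sketch only: compare the patch's step-down loop against the
     * suggested clamp for every 16-byte-aligned slot in the page.
     */
    #include <assert.h>
    #include <stdio.h>

    #define PAGE_SIZE     4096UL
    #define ROUNDUP(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))
    #define min(a, b)     ((a) < (b) ? (a) : (b))
    #define max(a, b)     ((a) > (b) ? (a) : (b))

    int main(void)
    {
        unsigned long size = 70;   /* assumed stub size, for illustration */

        for ( unsigned long addr = 0; addr < PAGE_SIZE; addr += 0x10 )
        {
            /* Patch's version: avoid NULL, then step down until the stub fits. */
            unsigned long a = addr ? addr : 0x10;

            while ( a + size > PAGE_SIZE )
                a -= 0x10;

            /* Suggested clamp: same result in a single expression. */
            unsigned long b = max(0x10UL,
                                  min(addr, PAGE_SIZE - ROUNDUP(size, 0x10)));

            assert(a == b);
        }

        printf("loop and clamp agree for all 16-byte slots\n");
        return 0;
    }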
> +
> +        ptr = __map_domain_page(pg);
> +        memset(ptr, -1, PAGE_SIZE);
> +        memcpy(ptr + addr, do_overwrite_rsb, size);
> +        unmap_domain_page(ptr);
> +
> +        smp_wmb();

What is this barrier for?

> +        hvm_overwrite_rsb = (void *)addr;
> +        printk(XENLOG_INFO "RSB stuffing stub at %p\n", hvm_overwrite_rsb);
> +    }
> +
>      spin_lock_init(&d->arch.hvm_domain.irq_lock);
>      spin_lock_init(&d->arch.hvm_domain.uc_lock);
>      spin_lock_init(&d->arch.hvm_domain.write_map.lock);
> --- a/xen/include/asm-x86/spec_ctrl_asm.h
> +++ b/xen/include/asm-x86/spec_ctrl_asm.h
> @@ -249,6 +249,8 @@
>
>  /* Use when exiting to HVM guest context. */
>  #define SPEC_CTRL_EXIT_TO_HVM                                           \
> +    mov hvm_overwrite_rsb(%rip), %rcx;                                  \
> +    ALTERNATIVE "", "INDIRECT_CALL %rcx", X86_FEATURE_SC_RSB_HVM;      \

... there are two reasons why I didn't do any RSB stuffing along these
lines.

First, this is racy with NMIs/etc.

Secondly, SMM mode does exactly the same to the whole system (outside of
Xen's control) with a call tree in HSEG/TSEG.  If we are running
natively, we can work out HSEG/TSEG and in principle make Xen's stuffing
plausibly look like the SMM handler.  If Xen is running virtualised, then
we can't.
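For concreteness, the kind of RSB-overwriting call sequence under
discussion (roughly what a do_overwrite_rsb-style stub boils down to) can
be sketched as below.  This is purely illustrative rather than Xen's
actual implementation, and assumes kernel-style compilation without a red
zone:

    /*
     * Illustrative only: 32 CALLs whose return sites are speculation
     * traps, so every RSB entry ends up pointing at a benign
     * pause/lfence loop.  The stack pointer is saved and restored
     * around the sequence, discarding the 32 pushed return addresses.
     */
    static void overwrite_rsb_sketch(void)
    {
        unsigned long tmp;

        asm volatile ( "mov %%rsp, %0\n\t"
                       "mov $16, %%ecx\n"           /* 16 iterations, 2 calls each */
                       "1: call 2f\n"
                       "3: pause; lfence; jmp 3b\n" /* speculation trap */
                       "2: call 4f\n"
                       "5: pause; lfence; jmp 5b\n" /* speculation trap */
                       "4: sub $1, %%ecx; jnz 1b\n\t"
                       "mov %0, %%rsp"              /* drop the 32 return addresses */
                       : "=&r" (tmp) : : "ecx", "cc", "memory" );
    }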
Overall, given the holes in the available mechanism, and the fact that
Xen's current callgraph is actually pretty good (wrt RSB) for current
operating systems, I didn't think it was worth doing anything special.

~Andrew