
Re: [Xen-devel] Ongoing/future speculative mitigation work



On Thu, Oct 18, 2018 at 06:46:22PM +0100, Andrew Cooper wrote:
> Hello,
> 
> This is an accumulation and summary of various tasks which have been
> discussed since the revelation of the speculative security issues in
> January, and also an invitation to discuss alternative ideas.  They are
> x86 specific, but a lot of the principles are architecture-agnostic.
> 
> 1) A secrets-free hypervisor.
> 
> Basically every hypercall can be (ab)used by a guest, and used as an
> arbitrary cache-load gadget.  Logically, this is the first half of a
> Spectre SP1 gadget, and is usually the first stepping stone to
> exploiting one of the speculative sidechannels.
> 
> Short of compiling Xen with LLVM's Speculative Load Hardening (which is
> still experimental, and comes with a ~30% perf hit in the common case),
> this is unavoidable.  Furthermore, throwing a few array_index_nospec()
> into the code isn't a viable solution to the problem.
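
For anyone following along, the kind of per-site annotation being
referred to looks roughly like this (a minimal sketch, not lifted from
the Xen tree) -- fine for one array access, but not something that
realistically covers every load reachable from a hypercall:

/*
 * Minimal illustration (not from the Xen tree) of the per-site fix:
 * an attacker-controlled index is bounds-checked, but the CPU may
 * speculate past the check and perform the out-of-bounds load anyway,
 * pulling attacker-chosen data into the cache.
 */
#include <xen/nospec.h>        /* array_index_nospec() */

#define TABLE_SIZE 64
static unsigned long table[TABLE_SIZE];

unsigned long lookup(unsigned int idx)
{
    if ( idx >= TABLE_SIZE )
        return 0;

    /*
     * Clamp the index so that, even under speculation, it cannot
     * reach beyond the array.  Repeating this for every reachable
     * load across all hypercalls is the part that doesn't scale.
     */
    idx = array_index_nospec(idx, TABLE_SIZE);

    return table[idx];
}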
> 
> An alternative option is to have less data mapped into Xen's virtual
> address space - if a piece of memory isn't mapped, it can't be loaded
> into the cache.
> 
> An easy first step here is to remove Xen's directmap, which will mean
> that guests' general RAM isn't mapped by default into Xen's address
> space.  This will come with some performance hit, as the
> map_domain_page() infrastructure will now have to actually
> create/destroy mappings, but removing the directmap will improve
> non-speculative security as well (no possibility of ret2dir as an
> exploit technique).

I have looked into making the "separate xenheap domheap with partial
direct map" mode (see common/page_alloc.c) work, but found it not as
straightforward as it should have been.

Before I spend more time on this, I would like some opinions on whether
there is another approach that might be more useful than that mode.
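
To make the perf trade-off concrete: without a (full) directmap, every
access to guest memory has to go through a transient mapping instead of
a fixed-offset one.  Roughly (simplified sketch, signatures
approximate):

/*
 * Rough sketch of the cost in question, not a patch.  With a full
 * directmap, Xen reaches any page through a fixed offset; with the
 * directmap gone (or only partial), each access goes through a
 * transient mapping that is created and torn down every time.
 */
#include <xen/domain_page.h>   /* map_domain_page() / unmap_domain_page() */
#include <xen/mm.h>

static void zero_guest_page(mfn_t mfn)
{
    /* Becomes a real pagetable update once the directmap is gone. */
    void *va = map_domain_page(mfn);

    clear_page(va);

    /* ... and the teardown is where the perf hit comes from. */
    unmap_domain_page(va);
}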

> 
> Beyond the directmap, there are plenty of other interesting secrets in
> the Xen heap and other mappings, such as the stacks of the other pcpus. 
> Fixing this requires moving Xen to having a non-uniform memory layout,
> and this is much harder to change.  I already experimented with this as
> a meltdown mitigation around about a year ago, and posted the resulting
> series on Jan 4th,
> https://lists.xenproject.org/archives/html/xen-devel/2018-01/msg00274.html,
> some trivial bits of which have already found their way upstream.
> 
> To have a non-uniform memory layout, Xen must not share L4 pagetables,
> i.e. Xen must never have two pcpus which reference the same pagetable
> in %cr3.
> 
> This property already holds for 32bit PV guests, and all HVM guests, but
> 64bit PV guests are the sticking point.  Because Linux has a flat memory
> layout, when a 64bit PV guest schedules two threads from the same
> process on separate vcpus, those two vcpus have the same virtual %cr3,
> and currently, Xen programs the same real %cr3 into hardware.

Which bit of Linux code are you referring to? If you remember it off the
top of your head, it would save me some time digging around. If not,
never mind, I can look it up myself.

> 
> If we want Xen to have a non-uniform layout, the two options are:
> * Fix Linux to have the same non-uniform layout that Xen wants
> (backwards compatibility for older 64bit PV guests can be achieved with
> xen-shim).
> * Make use of the XPTI algorithm (specifically, the pagetable sync/copy
> part) forevermore.
> 
> Option 2 isn't great (especially for perf on fixed hardware), but does
> keep all the necessary changes in Xen.  Option 1 looks to be the better
> option long term.

What is the problem with doing 1 and 2 at the same time? I think XPTI
can be enabled/disabled on a per-guest basis?
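
For reference, my reading of the pagetable sync/copy part of option 2
is conceptually something like this (invented names, slot counts and
helpers -- purely illustrative, not the actual XPTI code):

/*
 * Conceptual sketch only: names, slot counts and helpers below are
 * invented for illustration and are not the real XPTI code.  Each pcpu
 * owns a private root pagetable; the guest-controlled slots are copied
 * in on each switch, while the Xen-private slots (stack, per-cpu data)
 * stay unique to that pcpu, so no two pcpus ever end up with the same
 * pagetable in %cr3 even when two vcpus of one process are scheduled.
 */
#define NR_PCPUS        8      /* illustrative */
#define L4_ENTRIES      512
#define GUEST_L4_SLOTS  256    /* illustrative split, not the real layout */

typedef unsigned long l4e_t;

static l4e_t pcpu_root[NR_PCPUS][L4_ENTRIES]
    __attribute__((__aligned__(4096)));

/* Hypothetical helper: load a root table's physical address into %cr3. */
extern void load_root_into_cr3(const l4e_t *root);

void switch_to_guest_l4(unsigned int cpu, const l4e_t *guest_l4)
{
    l4e_t *root = pcpu_root[cpu];
    unsigned int i;

    /* Sync the guest-visible slots from the vcpu's (possibly shared) L4. */
    for ( i = 0; i < GUEST_L4_SLOTS; i++ )
        root[i] = guest_l4[i];

    /*
     * The remaining slots were filled at boot with this pcpu's own Xen
     * mappings and are never copied from anywhere shared.
     */
    load_root_into_cr3(root);
}

If that is roughly right, the per-guest question above is mostly about
whether the copy step runs for a given domain's context switches at all.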

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

