
Re: [Xen-devel] further post-Meltdown-bad-aid performance thoughts



>>> On 22.01.18 at 16:15, <george.dunlap@xxxxxxxxxx> wrote:
> On 01/22/2018 01:30 PM, Jan Beulich wrote:
>>>>> On 22.01.18 at 13:33, <george.dunlap@xxxxxxxxxx> wrote:
>>> What I'm proposing is something like this:
>>>
>>> * We have a "global" region of Xen memory that is mapped by all
>>> processors.  This will contain everything we consider non-sensitive,
>>> including Xen text segments and most domain and vcpu data.  But it will
>>> *not* map all of host memory, nor have access to sensitive data, such as
>>> vcpu register state.
>>>
>>> * We have per-cpu "local" regions.  In this region we will map,
>>> on-demand, guest memory which is needed to perform current operations.
>>> (We can consider how strictly we need to unmap memory after using it.)
>>> We will also map the current vcpu's registers.
>>>
>>> * On entry to a 64-bit PV guest, we don't change the mapping at all.
>>>
>>> Now, no matter what the speculative attack -- SP1, SP2, or SP3 -- a vcpu
>>> can only access its own RAM and registers.  There's no extra overhead to
>>> context switching into or out of the hypervisor.
>> 
>> And we would open back up the SP3 variant of guest user mode
>> attacking its own kernel by going through the Xen mappings. I
>> can't exclude that variants of SP1 (less likely SP2) allowing indirect
>> guest-user -> guest-kernel attacks could be found.
> 
> How?  Xen doesn't have the guest kernel memory mapped when it's not
> using it.

Oh, so you mean to do away with the direct map altogether?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

