
Re: [Xen-devel] Xen Project Spectre/Meltdown FAQ



On 07/01/2018 15:00, Marek Marczykowski-Górecki wrote:
> On Fri, Jan 05, 2018 at 07:05:56PM +0000, Andrew Cooper wrote:
>> On 05/01/18 18:16, Rich Persaud wrote:
>>>> On Jan 5, 2018, at 06:35, Lars Kurth <lars.kurth.xen@xxxxxxxxx> wrote:
>>>> Linux’s KPTI series is designed to address SP3 only.  For Xen guests,
>>>> only 64-bit PV guests are affected by SP3. A KPTI-like approach was
>>>> explored initially, but required significant ABI changes.  
> Is some partial KPTI-like approach feasible? Like unmapping memory owned
> by other guests, but keeping Xen areas mapped? This will still allow
> leaking Xen memory, but there are very few secrets there (vCPUs state,
> anything else?), so overall impact will be much lower.

Feasible?  Certainly not on a short timescale.

One issue which cropped up when discussing this option is that xenheap
allocations rely on the directmap mappings to function.  vmap regions
are another case where we need to maintain permanent mappings to
specific guest frames.
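
To make the dependency concrete, here is a toy sketch (not Xen code; the
names and the directmap base are made up) of why a pointer handed out by a
xenheap-style allocator is only valid for as long as the 1:1 directmap
alias of that frame stays present:

    /* Illustrative only -- models a directmap-backed xenheap allocation. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT      12
    #define DIRECTMAP_BASE  0xffff830000000000UL  /* hypothetical base */

    typedef uint64_t mfn_t;

    /* Every machine frame has a permanent alias inside the directmap. */
    static void *mfn_to_directmap(mfn_t mfn)
    {
        return (void *)(DIRECTMAP_BASE + (mfn << PAGE_SHIFT));
    }

    /* Toy xenheap allocation: pick a frame, return its directmap alias. */
    static void *toy_alloc_xenheap_page(mfn_t free_mfn)
    {
        return mfn_to_directmap(free_mfn);
    }

    int main(void)
    {
        void *p = toy_alloc_xenheap_page(0x1234);

        /*
         * Any later dereference of p (e.g. from an interrupt taken while
         * restricted, guest-local page tables were loaded) would fault
         * unless the directmap alias is still mapped.
         */
        printf("xenheap page aliased at %p\n", p);
        return 0;
    }

Tearing down the directmap around guest execution would invalidate every
such live pointer, which is part of why this isn't a quick change.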

Register state in struct vcpu, or stack frames including GPR content, is
probably the most directly interesting information to read, but things
like the console ring or grant frames might be equally lucrative.

>
>>>> Instead
>>>> we’ve decided to go with an alternate approach, which is less
>>>> disruptive and less complex to implement. The chosen approach runs PV
>>>> guests in a PVH container, which ensures that PV guests continue to
>>>> behave as before, while providing the isolation that protects the
>>>> hypervisor from SP3. This works well for Xen 4.8 to Xen 4.10, which
>>>> is currently our priority.
> There is one drawback of such an approach: running PV will now require a
> CPU with VT-x (or equivalent).  I think this is a huge problem, ruining
> the most important (or maybe, nowadays, the only) advantage of PV versus
> PVH or HVM.

HVM-capable hardware has been around for 12 years now, which means that
for a lot of people, this solution is a whole lot better than nothing.

I'm not suggesting that we give up on PV guests, but see the cover
letter for "x86: Prerequisite work for a Xen KAISER solution" which
discusses some of the challenges which need to be overcome.
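
For completeness: the intent is that opting a guest into the shim becomes
a per-guest switch in the xl configuration.  A sketch, assuming the pvshim
knob which the shim series adds to xl.cfg (the exact syntax may differ
between releases and backports, and the kernel/ramdisk paths here are
placeholders):

    # Existing PV kernel and ramdisk stay as-is; the guest is simply
    # wrapped in a PVH container by the shim.
    type    = "pvh"   # outer container
    pvshim  = 1       # boot the PV guest inside the shim
    kernel  = "/path/to/pv-kernel"
    ramdisk = "/path/to/pv-initrd"
    memory  = 1024
    vcpus   = 2

No changes to the guest kernel itself are needed; from its point of view
it is still booting as a 64-bit PV guest.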

>>> Since PVH does not yet support PCI passthrough, are there other
>>> recommended SP3 mitigations for 64-bit PV driver domains?
>> Lock them down?  Device driver domains, even if not fully trusted, are
>> going to be part of the system and therefore at least semi-TCB.
>>
>> If an attacker can't run code in your driver domain (and be aware of
>> things like server-side processing, JIT of SQL, etc. as "running code"
>> methods), they aren't in a position to mount an SP3 attack.
> Well, the main reason why driver domains are used in Qubes OS is the
> assumption that it is not possible to really "lock them down", given the
> full OS (Linux) running inside and being exposed to the outside world
> (having network adapters, USB controllers etc).  There are so many
> components running there that, for sure, some of them are buggy.  Just
> some examples exploitable in the recent past: DHCP client, Bluetooth
> stack.
>
> If we believed that handling those devices exposed to the outside world
> was "safe", we wouldn't use driver domains at all...

Indeed, but they are in a better position than arbitrary VMs, because
users can't just log into them and start running code.  (I really hope...)

~Andrew

