
Re: [Xen-devel] [PATCH 00/22] Vixen: A PV-in-HVM shim



I sent a v2 out with most of the changes discussed in this thread.
The only things missing are getting rid of hardware_domain and
ECS_RESERVED vs. ECS_PROXY.

Regards,

Anthony Liguori

On Sat, Jan 6, 2018 at 4:05 PM, Anthony Liguori <anthony@xxxxxxxxxxxxx> wrote:
> On Sat, Jan 6, 2018 at 3:50 PM, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 
> wrote:
>> On 06/01/2018 22:54, Anthony Liguori wrote:
>>> From: Anthony Liguori <aliguori@xxxxxxxxxx>
>>>
>>> CVE-2017-5754 is problematic for paravirtualized x86 domUs because it
>>> appears to be very difficult to isolate the hypervisor's page tables
>>> from PV domUs while maintaining ABI compatibility.  Instead of trying
>>> to make a KPTI-like approach work for Xen PV, it seems reasonable to
>>> run a copy of Xen within an HVM (or PVH) domU to provide backwards
>>> compatibility with guests as mentioned in XSA-254 [1].
>>>
>>> This patch series adds a new mode to Xen called Vixen (Virtualized
>>> Xen)
>>
>> It is quite telling that through all of this, I never even considered
>> asking if vixen stood for anything!
>
> Also, topical for the season:
> https://www.youtube.com/watch?v=78c7vDFt6G8&feature=youtu.be&t=7
>
>>> which provides a PV-compatible interface while gaining
>>> CVE-2017-5754 protection for the host provided by hardware
>>> virtualization.  Vixen supports running a single unprivileged PV
>>> domain (a dom1) that is constructed by the dom0 domain builder.
>>>
>>> Please note the Xen page table configuration fundamental to the
>>> current PV ABI makes it impossible for an operating system to mitigate
>>> CVE-2017-5754 through mechanisms like Kernel Page Table Isolation
>>> (KPTI).  In order for an operating system to mitigate CVE-2017-5754 it
>>> must run directly in a HVM or PVH domU.
>>
>> It's a little more complicated than this, but I suppose it is worth
>> pointing out.
>>
>> A 64bit PV guest kernel cannot, of its own accord, protect itself
>> against SP3/Meltdown.  This is due to the shared nature/responsibility
>> of pagetables between the PV guest kernel and Xen.
>>
>> What the Vixen/PV-shim plan does is isolate the guest sufficiently that
>> any SP3 attacks can't read data belonging to other guests on the host.
>>
>> An SP3/Meltdown mitigation can only come from having Xen change the way
>> it uses pagetables, and my 44-patch prerequisite series serves to
>> demonstrate that this seems impractical with the existing ABI.
>
> Correct.  You can get close but getting 100% of the way seems unlikely.
>
>>> This series is very similar to the PVH series posted by Wei and we
>>> have been discussing how to merge efforts.  We were hoping to have
>>> more time to work this out.  I am posting this because I'm fairly
>>> confident that this series is complete (all PV instances in EC2 are
>>> using this) and others might find it useful.  I also wanted to have
>>> more of a discussion about the best way to merge and some of the
>>> differences in designs.
>>
>> Some ad hoc thoughts so far:
>>
>> * Upstream, we need to take the PV-Shim side of domid handling.
>> Unilaterally using dom1 is fine for server-virt infrastructure where
>> guests only ever talk to dom0, but isn't fine if you've got domains
>> which are communicating directly (e.g. with libvchan).  This is very
>> minor in the grand scheme of things though.
>
> That's fine.  I think we should try to focus on merging some common
> infrastructure because I don't think 75+ patch series are going to be
> easy to get agreement on.
>
> I'm not a huge fan of passing the domid via CPUID.  That's going to
> be messy over time.  I do, however, like the idea of passing it as a
> command line argument.  I'm happy to add support for that if that's
> agreeable.
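
[For illustration only: a minimal sketch of the command-line approach
being proposed. The option name "shim_domid" and the parser are
hypothetical, not the actual Xen implementation; a real patch would use
Xen's own boot-parameter machinery rather than open-coded string
scanning.]

```c
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical sketch: extract a "shim_domid=<n>" token from the
 * boot command line.  Falls back to dom1, matching Vixen's current
 * hard-coded behaviour, when the option is absent.
 */
static int parse_shim_domid(const char *cmdline)
{
    const char *p = strstr(cmdline, "shim_domid=");

    if ( !p )
        return 1;   /* default: dom1, as Vixen does today */

    return atoi(p + strlen("shim_domid="));
}
```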
>
>> * I do prefer the Vixen side of startup, where we describe rather more
>> clearly what is going on.  I never got around to stea^W borrowing this
>> for PV-shim.
>
> I think no matter what, we should try to get the first few patches merged
> to add basic guest detection and hypercall support.
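
[As a sketch of the "basic guest detection" piece: on HVM, the
hypervisor CPUID leaf 0x40000000 returns the signature "XenVMMXenVMM"
in ebx/ecx/edx (the leaf base can be shifted in 0x100 increments when
other vendor leaves such as Viridian are exposed first). The helper
below only checks register values that have already been read with the
CPUID instruction; it is an illustration, not the in-tree code.]

```c
#include <stdint.h>
#include <string.h>

/*
 * Check whether CPUID leaf 0x40000000 output carries the Xen
 * hypervisor signature "XenVMMXenVMM" (ebx/ecx/edx, little-endian).
 */
static int is_xen_signature(uint32_t ebx, uint32_t ecx, uint32_t edx)
{
    char sig[13];

    memcpy(&sig[0], &ebx, 4);
    memcpy(&sig[4], &ecx, 4);
    memcpy(&sig[8], &edx, 4);
    sig[12] = '\0';

    return strcmp(sig, "XenVMMXenVMM") == 0;
}
```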
>
>> * Whatever eventual version gets in upstream, it is important that it
>> is HVM and PVH capable, for backwards and forwards compatibility.  Again,
>> this doesn't appear to be too complicated to arrange in practice.  For
>> reference, what is the oldest version of Xen you need to target here?
>> (The pre-console-ring observation puts it quite old)
>
> 3.4.x is what we're targeting.  That is indeed old, but since this
> is a security issue, supporting a wide range of environments seems
> like the right thing to do.
>
>> * For PV-shim, we took the approach of making the domU neither
>> privileged nor the hardware domain.  While I expect this throws up a
>> different set of issues, I think it is a cleaner approach overall.
>
> I never got a chance to try this out and see what breaks.
>
> The one argument I'd make against it is that over time, I'd like to add
> privileges to the domU in an attempt to improve performance.  We found
> a lot of weird compatibility issues on older versions of Linux so I didn't
> attempt to do any of this up front but in the long term, I would like to steal
> some of the tricks from Xenner.
>
>> I'm sure there are areas I've missed, but this is hopefully a start.
>
> Thanks Andrew!
>
> Regards,
>
> Anthony Liguori
>
>> ~Andrew
>>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

