
Re: [Xen-devel] Enabling #VE for a domain from dom0



On Fri, Feb 24, 2017 at 8:10 AM, Andrew Cooper
<andrew.cooper3@xxxxxxxxxx> wrote:
> On 24/02/17 14:42, Vlad-Ioan TOPAN wrote:
>>> #VE, by design, raises an exception in non-root context, without
>>> breaking out to the hypervisor.
>>>
>>> The vcpu in question needs to set up a suitable #VE handler, so it is
>>> not safe for an external entity to choose when a vcpu should start
>>> receiving #VEs.
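
(For context: the handler consumes a 4K "#VE information" page that
the CPU fills in on each convertible EPT violation. A rough sketch of
the layout, per the Intel SDM -- the struct and field names here are
mine, not Xen's:)

    #include <stdint.h>

    /* #VE information area (Intel SDM vol. 3, "Virtualization
     * Exceptions"); the CPU writes it at the start of the page the
     * guest registers. #VE is only delivered while the semaphore
     * word is zero. */
    struct ve_info {
        uint32_t exit_reason;        /* always 48 (EPT violation) */
        uint32_t semaphore;          /* CPU sets to ~0 on delivery;
                                      * handler writes 0 to re-arm */
        uint64_t exit_qualification;
        uint64_t gla;                /* guest-linear address */
        uint64_t gpa;                /* guest-physical address */
        uint16_t eptp_index;         /* altp2m view that faulted */
    };

Until the guest has installed an IDT entry for vector 20 (#VE) and
zeroed that page, delivering a #VE would simply crash it -- hence it
cannot safely be switched on from outside.
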
>> The problem is that, from a security-solution standpoint, using
>> libxc to enable #VE is not feasible in a Windows guest. As
>> implemented, the libxc path requires sharing a structure between the
>> guest and the host; that structure carries only the gfn of the #VE
>> page plus a domain id/vcpu id, and the latter are useless since #VE
>> can only be enabled on the current VCPU. Would a patch providing a
>> simpler VMCALL to enable #VE (no shared structures, just the gfn
>> passed directly) be acceptable?
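
(For reference, the structure in question, roughly as declared in
xen/include/public/hvm/hvm_op.h -- quoted from memory, so check your
headers; only the enable_notify arm of the union is shown:)

    #include <stdint.h>

    typedef uint16_t domid_t;    /* as in Xen's public headers */

    /* Sub-op argument: register the #VE info page for the calling
     * vcpu. */
    struct xen_hvm_altp2m_vcpu_enable_notify {
        uint32_t pad;
        uint64_t gfn;       /* gfn of the 4K #VE information page */
    };

    /* Argument to the HVMOP_altp2m hypercall. */
    struct xen_hvm_altp2m_op {
        uint32_t version;   /* HVMOP_ALTP2M_INTERFACE_VERSION */
        uint32_t cmd;       /* HVMOP_altp2m_vcpu_enable_notify == 3 */
        domid_t  domain;    /* DOMID_SELF from within the guest */
        uint16_t pad1;
        uint32_t pad2;
        union {
            struct xen_hvm_altp2m_vcpu_enable_notify enable_notify;
            /* ... other sub-op arguments elided ... */
        } u;
    };

As noted above, the gfn is the only payload that actually matters
when this is issued from inside the guest.
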
>
>  /sigh
>
> The underlying hypercall is HVMOP_altp2m, which is supposed to have a
> stable ABI, as it is guest visible.
>
> However, it has an HVMOP_ALTP2M_INTERFACE_VERSION wedged in there, which
> is unacceptable, and broken, as it cannot be used correctly from within
> a guest.
>
> The only option we have is to freeze HVMOP_ALTP2M_INTERFACE_VERSION at
> its current value and force it to never change.  I am sorry for not
> having picked up on this point during review of the series several
> releases ago.
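
(The constant and check in question, paraphrased -- a guest-visible
version field that the hypervisor compares with hard equality, so any
bump would reject every existing guest binary outright:)

    /* xen/include/public/hvm/hvm_op.h */
    #define HVMOP_ALTP2M_INTERFACE_VERSION 0x00000001

    /* In Xen's HVMOP_altp2m handler, paraphrased: */
    if ( a.version != HVMOP_ALTP2M_INTERFACE_VERSION )
        return -EOPNOTSUPP;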

I'm just curious, why is it broken exactly?

>
> However, for your purposes, you don't need libxc.  You should just be
> able to make HVMOP hypercalls directly to set up #VE from within the guest.
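
(A minimal sketch of doing exactly that from a 64-bit HVM guest,
reusing the xen_hvm_altp2m_op layout sketched above. It assumes the
Xen hypercall page has already been installed by writing a page's gpa
to the MSR advertised in CPUID leaf 0x40000002, and a SysV-ABI
toolchain so the first two arguments land in rdi/rsi as Xen expects;
the function names are mine:)

    #include <stdint.h>
    #include <string.h>

    #define __HYPERVISOR_hvm_op               34
    #define HVMOP_altp2m                      25
    #define HVMOP_altp2m_vcpu_enable_notify    3
    #define DOMID_SELF                    0x7ff0

    /* Hypercall page, mapped and populated earlier; entry i lives
     * at byte offset i * 32. */
    extern uint8_t hypercall_page[];

    static long hypercall2(unsigned int nr, unsigned long a1,
                           void *a2)
    {
        long (*fn)(unsigned long, void *) =
            (void *)&hypercall_page[nr * 32];
        return fn(a1, a2);
    }

    /* Register 've_gfn' (a zeroed 4K page) as the #VE information
     * page for the calling vcpu. */
    static int enable_ve(uint64_t ve_gfn)
    {
        struct xen_hvm_altp2m_op a;

        memset(&a, 0, sizeof(a));
        a.version = HVMOP_ALTP2M_INTERFACE_VERSION;
        a.cmd     = HVMOP_altp2m_vcpu_enable_notify;
        a.domain  = DOMID_SELF;
        a.u.enable_notify.gfn = ve_gfn;

        return (int)hypercall2(__HYPERVISOR_hvm_op,
                               HVMOP_altp2m, &a);
    }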

Right, I've been meaning for a while now to submit a patch that
removes that libxc function, as it is misleading.

Tamas
