
Re: [Xen-devel] [PATCH] x86/hvm: Allow the guest to permit the use of userspace hypercalls



On 13/01/16 12:26, Stefano Stabellini wrote:
> On Wed, 13 Jan 2016, Juergen Gross wrote:
>> On 13/01/16 11:41, Stefano Stabellini wrote:
>>> On Wed, 13 Jan 2016, Juergen Gross wrote:
>>>> On 12/01/16 18:23, Stefano Stabellini wrote:
>>>>> On Tue, 12 Jan 2016, Juergen Gross wrote:
>>>>>> On 12/01/16 18:05, Stefano Stabellini wrote:
>>>>>>> On Tue, 12 Jan 2016, Jan Beulich wrote:
>>>>>>>>>>> On 12.01.16 at 13:07, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
>>>>>>>>> On Mon, 11 Jan 2016, David Vrabel wrote:
>>>>>>>>>> On 11/01/16 17:17, Andrew Cooper wrote:
>>>>>>>>>>> So from one point of view, sufficient justification for this change
>>>>>>>>>>> is "because the Linux way isn't the only valid way to do this".
>>>>>>>>>>
>>>>>>>>>> "Because we can" isn't a good justification for adding something new.
>>>>>>>>>> Particularly something that is trivially easy to (accidentally)
>>>>>>>>>> misuse and open a big security hole between userspace and kernel.
>>>>>>>>>>
>>>>>>>>>> The vague idea for a userspace netfront that's floating around
>>>>>>>>>> internally is also not a good reason for pushing this feature
>>>>>>>>>> at this time.
>>>>>>>>>
>>>>>>>>> I agree with David, but I might have another good use case for this.
>>>>>>>>>
>>>>>>>>> Consider the following scenario: we have a Xen HVM guest, with Xen
>>>>>>>>> installed inside of it (nested virtualization). I'll refer to Xen
>>>>>>>>> running on the host as L0 Xen and Xen running inside the VM as L1 Xen.
>>>>>>>>> Similarly we have two dom0 running, the one with access to the
>>>>>>>>> physical hardware, L0 Dom0, and the one running inside the VM, L1 Dom0.
>>>>>>>>>
>>>>>>>>> Let's suppose that we want to lay the groundwork for L1 Dom0 to use PV
>>>>>>>>> frontend drivers, netfront and blkfront, to speed up execution. In order
>>>>>>>>> to do that, the first thing it needs to do is make a hypercall to L0
>>>>>>>>> Xen. That's because netfront and blkfront need to communicate with
>>>>>>>>> netback and blkback in L0 Dom0: the event channels and grant tables are
>>>>>>>>> the ones provided by L0 Xen.
>>>>>>>>
>>>>>>>> That's again a layering violation (bypassing the L1 hypervisor).
>>>>>>>
>>>>>>> True, but in this scenario it might be necessary for performance
>>>>>>> reasons: otherwise every hypercall would need to bounce off L1 Xen,
>>>>>>> possibly cancelling the benefits of running netfront and blkfront in the
>>>>>>> first place. I don't have numbers though.
>>>>>>
>>>>>> How is this supposed to work? How can dom0 make hypercalls to the L1 _or_
>>>>>> L0 hypervisor? How can it select the hypervisor it is talking to?
>>>>>
>>>>> From L0 Xen's point of view, the guest is just a normal PV on HVM guest,
>>>>> it doesn't matter what's inside, so L1 Dom0 is going to make hypercalls
>>>>> to L0 Xen like any other PV on HVM guest: mapping the hypercall page by
>>>>> writing to the right MSR, retrieved via cpuid, then calling into the
>>>>> hypercall page.
>>>>
>>>> But how to specify that cpuid/MSR should target the L0 hypervisor
>>>> instead of L1?
>>>
>>> Keeping in mind that L1 Dom0 is a PV guest from L1 Xen's point of view,
>>> but a PV on HVM guest from L0 Xen's point of view, it is true that the
>>> cpuid could be an issue because the cpuid would be generated by L0 Xen,
>>> but then would get filtered by L1 Xen. However, the MSR should be OK,
>>> assuming that L1 Xen allows access to it: from inside the VM it would
>>> look like a regular machine MSR; it couldn't get confused with anything
>>> causing hypercalls to L1 Xen.
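
For reference, the cpuid/MSR sequence being discussed is roughly the
following. This is only a minimal sketch of the usual PV-on-HVM discovery
path; hypercall_page and phys_addr_of() are illustrative placeholders, not
anything from the patch under discussion:

/* Minimal sketch of the PV-on-HVM hypercall page setup described above.
 * Assumes a 64-bit guest kernel running at CPL0. */
#include <stdint.h>
#include <string.h>

static inline void cpuid(uint32_t leaf, uint32_t *eax, uint32_t *ebx,
                         uint32_t *ecx, uint32_t *edx)
{
    asm volatile ("cpuid"
                  : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
                  : "0" (leaf));
}

static inline void wrmsr(uint32_t msr, uint64_t val)
{
    asm volatile ("wrmsr" :: "c" (msr),
                  "a" ((uint32_t)val), "d" ((uint32_t)(val >> 32)));
}

extern uint8_t hypercall_page[4096];      /* page-aligned, illustrative */
extern uint64_t phys_addr_of(void *va);   /* illustrative VA->PA helper */

static uint32_t xen_cpuid_base(void)
{
    uint32_t base, eax, ebx, ecx, edx;
    char sig[13];

    /* Xen advertises itself in the 0x400000xx hypervisor cpuid leaves. */
    for (base = 0x40000000; base < 0x40010000; base += 0x100) {
        cpuid(base, &eax, &ebx, &ecx, &edx);
        memcpy(sig + 0, &ebx, 4);
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        sig[12] = '\0';
        if (!strcmp(sig, "XenVMMXenVMM") && eax >= base + 2)
            return base;
    }
    return 0;
}

static void xen_init_hypercall_page(void)
{
    uint32_t base = xen_cpuid_base();
    uint32_t pages, msr, ecx, edx;

    if (!base)
        return;

    /* Leaf base+2: EAX = number of hypercall pages, EBX = MSR index. */
    cpuid(base + 2, &pages, &msr, &ecx, &edx);
    (void)pages;   /* a single page is enough for this sketch */

    /* Writing the page's physical address asks the hypervisor that owns
     * this MSR to fill the page with hypercall stubs. */
    wrmsr(msr, phys_addr_of(hypercall_page));
}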
>>
>> L1 Xen wouldn't allow access to it. Otherwise it couldn't ever set up
>> a hypercall page for one of its guests.
> 
> If they are PV guests, the hypercall page is not mapped via MSRs.
> 
> 
>>>> And even if this were working, just mapping the correct page wouldn't
>>>> help: the instructions doing the transition to the hypervisor would
>>>> still end up entering the L1 hypervisor, as those instructions must be
>>>> handled by L1 first in order to make nested virtualization work.
>>>
>>> This is wrong. The hypercall page populated by L0 Xen would contain
>>> vmcall instructions. When L1 Dom0 calls into the hypercall page, it
>>> would end up making a vmcall, which brings it directly to L0 Xen,
>>> skipping L1 Xen.
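
To make that concrete: on VMX hardware each 32-byte slot of the L0-provided
page is essentially "mov $nr, %eax; vmcall; ret" (vmmcall on AMD), so a call
through the page traps straight to L0. A rough sketch of how a guest would
invoke it; the hypercall numbers are the public ones from
xen/include/public/xen.h, everything else is illustrative:

#include <stdint.h>

extern uint8_t hypercall_page[4096];   /* the page registered via the MSR */

#define __HYPERVISOR_xen_version 17    /* from xen/include/public/xen.h */
#define XENVER_version            0

static inline long hypercall2(unsigned int nr, unsigned long a1,
                              unsigned long a2)
{
    long ret;

    /* 64-bit ABI: arguments in rdi, rsi, ...; result in rax.  The
     * argument registers may be clobbered across the hypercall. */
    asm volatile ("call *%[stub]"
                  : "=a" (ret), "+D" (a1), "+S" (a2)
                  : [stub] "r" (hypercall_page + nr * 32)
                  : "memory");
    return ret;
}

static uint32_t xen_version(void)
{
    /* Returns (major << 16) | minor of the hypervisor that owns the page,
     * i.e. L0 Xen in the scenario discussed above. */
    return hypercall2(__HYPERVISOR_xen_version, XENVER_version, 0);
}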
>>
>> Sure. And L0 Xen will see that this guest is subject to nested
>> virtualization and will reflect the vmcall to L1 Xen (see e.g.
>> xen/arch/x86/hvm/svm/nestedsvm.c, nestedsvm_check_intercepts()).
>> How else would L1 Xen ever get a vmcall of one of its guests?
> 
> Only if nested_hvm is enabled, I believe. Sorry for not being clearer
> earlier: I am talking about nested virtualization without nested
> vmx/svm, which is the default.
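
Just to pin down the distinction being made here, the decision L0 has to
take on a vmcall exit is roughly the following. This is illustrative
pseudo-C, not the actual code in nestedsvm.c, and all names are made up:

#include <stdbool.h>

enum vmcall_disposition {
    HANDLE_AS_L0_HYPERCALL,   /* treat as a hypercall to L0 Xen */
    REFLECT_TO_L1,            /* deliver a nested vmexit to L1 Xen */
};

struct vcpu_state {
    bool nested_hvm_enabled;      /* guest config: nested_hvm=1 */
    bool in_l2_guest;             /* vCPU currently running an L2 context */
    bool l1_intercepts_vmmcall;   /* L1's virtual VMCB/VMCS intercepts it */
};

static enum vmcall_disposition classify_vmcall(const struct vcpu_state *v)
{
    /* Without nested_hvm (the default), L1 never gets to run VMRUN or
     * VMLAUNCH, so every vmcall from inside the guest is simply a
     * hypercall to L0 -- this is the case being described above. */
    if (!v->nested_hvm_enabled || !v->in_l2_guest)
        return HANDLE_AS_L0_HYPERCALL;

    /* With nested_hvm enabled and an L2 context active, the exit is
     * reflected to L1 if L1 asked to intercept it. */
    if (v->l1_intercepts_vmmcall)
        return REFLECT_TO_L1;

    return HANDLE_AS_L0_HYPERCALL;
}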

Same applies to the MSR topic above, I guess.

> But you have a good point there: if nested_hvm is enabled, there should
> still be a way for L1 Dom0 to issue a hypercall to L0 Xen, otherwise
> how could L1 Dom0 ever set up netfront and blkfront?

You would have to add an L1 hypercall to do this. And that was Jan's
point, I suppose.

Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

