
Re: [Xen-devel] xen/xen vs xen/kvm nesting with pv drivers



On 06/09/16 14:32, Anthony Wright wrote:
> On 06/09/2016 14:05, Andrew Cooper wrote:
>> On 06/09/16 13:47, Anthony Wright wrote:
>>> I tried to install Xen (4.7.0 with a Linux 4.7.2 Dom0) on an AWS virtual
>>> machine, and it failed because, while AWS uses Xen, it requires you to use
>>> the PVHVM network driver.  I then tried to install Xen on a Google Cloud
>>> virtual machine and, despite that also requiring you to use PV drivers, it
>>> succeeded because Google Cloud uses KVM.
>>>
>>> I think this means that if you nest Xen in KVM you can use high performance 
>>> drivers, but if you nest Xen in Xen you have to use slower drivers, which 
>>> seems to be the wrong way around!
>>>
>>> I'd like to be able to install Xen on an AWS virtual machine, and wondered
>>> what the challenges are to getting the PV drivers working in a nested
>>> environment. Is this a problem with the Dom0 kernel only expecting there to
>>> be a single XenStore, or is there also a problem in Xen?
>> Nesting Xen inside Xen and getting high-speed drivers at L1 is a hard
>> problem, which is why no one has tackled it yet.
>>
>> The problems all revolve around L1's dom0.  It can't issue hypercalls to
>> L0, meaning that it can't find or connect the xenstore ring.  Even if it
>> could, there is the problem of multiple xenstores, which doesn't fit in
>> the current architecture.
>>
>> It would be lovely if someone would work on this, but it is a very large
>> swamp.
>>
>> ~Andrew
> Does L1's Dom0 have to issue the hypercalls directly? Would it be
> possible for L1's Dom0 to issue the request to the L1 hypervisor,
> and for that to call the L0 hypervisor? This would seem to fit the
> current architecture fairly closely. (Sorry if I've got the terminology wrong)

In principle, L1 Xen could proxy hypercalls from L1 dom0 to L0 Xen.
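
To give a flavour of what would need proxying: this is roughly how a
Linux HVM guest locates its xenstore ring today, via the HVM-params
hypercall.  Under a proxy scheme these calls are issued by L1 dom0, land
in L1 Xen, and would have to be recognised and re-issued against L0.  A
rough sketch only, not tested code:

    /* Sketch: locating the xenstore ring via the HVM-params hypercall.
     * In the nested case this would need to return L0's values, which
     * L1 Xen does not currently know about or forward. */
    #include <xen/hvm.h>                    /* hvm_get_parameter() */
    #include <xen/interface/hvm/params.h>   /* HVM_PARAM_STORE_* */

    static int locate_xenstore(unsigned long *pfn, unsigned int *evtchn)
    {
        uint64_t v;
        int err;

        err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
        if (err)
            return err;
        *pfn = (unsigned long)v;

        err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
        if (err)
            return err;
        *evtchn = (unsigned int)v;

        return 0;
    }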

However, event channels and grant maps affect the entire L1 guest physical
address space, and need to be managed by L1 Xen, not L1 dom0.  At that
point you are talking about proxying the event/grant interfaces as well,
but as Xen deliberately stays out of the disk/network data path in dom0,
it isn't obvious where the split should live.
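
To make the scale of that concrete, below is roughly what any PV frontend
does to publish a shared ring, using the Linux xenbus helpers.  Every step
goes through one of the global, per-domain interfaces (grant table, event
channels, xenstore), and none of them carries any notion of "which
hypervisor" or "which xenstore".  A sketch for illustration only:

    /* Sketch of a generic frontend publishing a shared ring.  Each call
     * implicitly targets "the" hypervisor and "the" xenstore; there is
     * no way to say whether the backend lives under L0 or L1. */
    #include <xen/xenbus.h>
    #include <xen/grant_table.h>

    static int frontend_connect(struct xenbus_device *dev, void *ring_page)
    {
        grant_ref_t gref;
        int evtchn, err;

        /* Grant the backend access to the ring page (grant table). */
        err = xenbus_grant_ring(dev, ring_page, 1, &gref);
        if (err < 0)
            return err;

        /* Allocate an inter-domain event channel for notifications. */
        err = xenbus_alloc_evtchn(dev, &evtchn);
        if (err)
            return err;

        /* Advertise both to the backend via xenstore. */
        err = xenbus_printf(XBT_NIL, dev->nodename, "ring-ref", "%u", gref);
        if (!err)
            err = xenbus_printf(XBT_NIL, dev->nodename,
                                "event-channel", "%u", evtchn);
        return err;
    }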

>
> Regarding multiple XenStores, I appreciate there would be significant
> problems, but you'd only have a maximum of two XenStores, one for the
> xenback drivers (the current XenStore) and one for the xenfront drivers
> (that talks to the parent hypervisor).

Until now, event channels, grant maps and xenstore have all been global,
per-domain interfaces with no concept of separate namespaces.  As a
result, changing the existing code to work in a nested fashion would be
very invasive.
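
Purely to illustrate the invasiveness: every existing user of the grant,
event-channel and xenstore interfaces would have to grow some way of
saying which instance it is talking to.  Something like the hypothetical
parameter below, threaded through every frontend, backend and toolstack
caller; nothing of the sort exists today:

    /* Hypothetical nesting-aware variant -- not a real API.  Today's
     * xenbus_printf() has no equivalent of the 'which' argument, and
     * retrofitting one touches every caller. */
    #include <xen/xenbus.h>

    enum xs_instance {
        XS_LOCAL,    /* L1's own xenstore, serving L1's backends        */
        XS_PARENT,   /* the parent (L0) xenstore, serving the frontends
                        that talk to the parent hypervisor              */
    };

    int nested_xenbus_printf(enum xs_instance which,
                             struct xenbus_transaction t,
                             const char *dir, const char *node,
                             const char *fmt, ...);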


Fundamentally, the problem is that Xen's virtual architecture does not
nest cleanly.  This is easy to identify in hindsight, but about 15 years
too late to act upon.  I don't have any good suggestions, short of
something radical like using virtio.

~Andrew
