
RE: [Xen-devel] [PATCH 05/15] Nested Virtualization: core



Keir Fraser wrote:
> On 18/08/2010 09:27, "Dong, Eddie" <eddie.dong@xxxxxxxxx> wrote:
> 
>>> +enum nestedhvm_vmexits
>>> +nestedhvm_vcpu_vmexit(struct vcpu *v, struct cpu_user_regs *regs,
>>> +   uint64_t exitcode)
>>> +{
>> 
>> I doubt the necessity of this kind of wrapper.
>> 
>> In single-layer virtualization, SVM and VMX each have their own handler
>> for each VM exit. Control passes from SVM/VMX to common code only when a
>> common function is invoked, because the two have quite a few differences,
>> and the savings from wrapping that function are really small, while we
>> pay with additional complexity on both the SVM and VMX sides, as well as
>> in readability and performance. Furthermore, it may limit the flexibility
>> to implement something new on either side.
>> 
>> Back to nested virtualization: I am not fully convinced we need a
>> common handler for VM entry/exit, at least not for now. It is basically
>> the same situation as the single-layer case above. Rather, we prefer to
>> jump from SVM/VMX to common code when a common service is requested.
>> 
>> Will that be easier?
> 
> I'm sure there has to be conversion-and-demux anyway in the
> SVM/VMX-specific code, at which point you may as well break out to
> individual common handler functions just where that makes sense, as
> you say. Also, I agree this model fits better with what we do in the
> non-nested case.
> 
Sounds reasonable :)
Moving those two generic entries into vendor-specific code makes it easier for
me to read and rebase, and leaves room for vendor-specific optimization in the
future.

After that, we may revisit the necessity of the remaining APIs in patches 4/5/7.
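
For readers following the thread, here is a minimal sketch of the structure
being agreed on: the vendor-specific exit path (SVM in this example) does the
conversion-and-demux itself and only breaks out to common nested-HVM code for
shared services, with no generic wrapper in between. All names below
(svm_nested_vmexit_demux, nested_common_handle_cpuid, svm_nested_reflect_vmexit)
are hypothetical placeholders for illustration, not the actual patch's API.

/*
 * Minimal sketch, assuming hypothetical names -- not the patch itself.
 */
#include <stdint.h>

struct vcpu;            /* opaque here; stands in for Xen's struct vcpu */
struct cpu_user_regs;   /* opaque here; guest register state            */

#define VMEXIT_CPUID 0x72   /* AMD SVM exit code for CPUID (illustrative) */

enum nested_exit_action {
    NESTED_EXIT_DONE,   /* handled on behalf of the L1 guest             */
    NESTED_EXIT_TO_L1,  /* must be reflected to the L1 hypervisor        */
};

/* Common service shared by SVM and VMX; lives in common nested-HVM code. */
static enum nested_exit_action
nested_common_handle_cpuid(struct vcpu *v, struct cpu_user_regs *regs)
{
    (void)v; (void)regs;
    /* architecture-neutral nested CPUID policy would go here */
    return NESTED_EXIT_TO_L1;
}

/* Vendor-specific way of handing an exit back to the L1 hypervisor. */
static void svm_nested_reflect_vmexit(struct vcpu *v, uint64_t exitcode)
{
    (void)v; (void)exitcode;
    /* fill in the L1 exit information and schedule the nested VMEXIT */
}

/* Vendor-specific demux: decode the hardware exit code in SVM code and
 * call common helpers only where that genuinely saves duplication,
 * matching how the non-nested exit path is organised. */
void svm_nested_vmexit_demux(struct vcpu *v, struct cpu_user_regs *regs,
                             uint64_t exitcode)
{
    switch ( exitcode )
    {
    case VMEXIT_CPUID:
        if ( nested_common_handle_cpuid(v, regs) == NESTED_EXIT_TO_L1 )
            svm_nested_reflect_vmexit(v, exitcode);
        break;
    default:
        /* everything else stays in SVM code until a common service helps */
        svm_nested_reflect_vmexit(v, exitcode);
        break;
    }
}

A VMX-side demux would look analogous, decoding its own exit reasons and
calling the same common helpers where they apply.
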

Thx, Eddie


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

