
[Xen-devel] RE: [PATCH 00/13] Nested Virtualization: Overview



Christoph Egger wrote:
> On Friday 03 September 2010 10:12:08 Dong, Eddie wrote:
>> The fundamental question of whether we should convert vendor-specific
>> code into vendor-neutral code is still unresolved.
>> I guess I don't need to review the rest of the code due to this :) I
>> strongly suggest we remove those unnecessary wrappers for readability,
>> flexibility and performance.
> 
> I am sort of surprised that you raise the same points again that were
> discussed in the last round. Please correct me if I am wrong:
> this sounds to me as if the statements/explanations from Keir, Tim and
> me are not clear to you. Please feel free to ask about whatever is
> unclear or questionable to you.

So you are not actually asking for my review comments, even though you
explicitly requested them.

I am not convinced that wrapping for wrapping's sake, which as you said just
makes the code C-style, should be accepted simply because somebody asked for
it.

Freely preloading/post-storing VMCS fields at arbitrary times is very hard,
because VMX only provides access to the current VMCS (it is effectively a CPU
register), while SVM can access all VMCBs, since they live in ordinary memory.
I can't say it is impossible (one could solve it with further limitations and
complexity); however, enforcing that conversion simply pushes tricky
constraints into the VMX code to limit when pre-load/post-store may happen,
and adds the extra cost of VMCS accesses.



Another portion of the so-called common code is actually SVM-only. Here is
part of it:



+
+static enum nestedhvm_vmexits
+nestedhvm_vmexit_msr(unsigned long *msr_bitmap, uint32_t msr, bool_t write)
+{
+       bool_t enabled;
+       unsigned long *msr_bit = NULL;
+
+       /*
+        * See AMD64 Programmers Manual, Vol 2, Section 15.10
+        * (MSR-Bitmap Address).
+        */
+       if ( msr <= 0x1fff )
+               msr_bit = msr_bitmap + 0x0000 / BYTES_PER_LONG;
+       else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
+               msr_bit = msr_bitmap + 0x0800 / BYTES_PER_LONG;
+       else if ( (msr >= 0xc0010000) && (msr <= 0xc0011fff) )
+               msr_bit = msr_bitmap + 0x1000 / BYTES_PER_LONG;



+/* Virtual GIF */
+int
+nestedsvm_vcpu_clgi(struct vcpu *v)
+{
+       if (!nestedhvm_enabled(v->domain)) {
+               hvm_inject_exception(TRAP_invalid_op, 0, 0);
+               return -1;
+       }
+
+       if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
+               return 0;
+
+       /* clear gif flag */
+       vcpu_nestedhvm(v).nh_gif = 0;
+       local_event_delivery_disable(); /* mask events for PV drivers */
+       return 0;
+}
+
+int
+nestedsvm_vcpu_stgi(struct vcpu *v)
+{
+       if (!nestedhvm_enabled(v->domain)) {
+               hvm_inject_exception(TRAP_invalid_op, 0, 0);
+               return -1;
+       }
+
+       /* Always set the GIF to make hvm_interrupt_blocked work. */
+       vcpu_nestedhvm(v).nh_gif = 1;
+
+       if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
+               return 0;
+
+       local_event_delivery_enable(); /* unmask events for PV drivers */
+       return 0;
+}
+
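The virtual-GIF semantics the excerpt implements can be modelled in a few
lines (illustrative names only, not the Xen API): CLGI clears the guest's
global interrupt flag and masks event delivery, STGI sets the flag and
unmasks it.

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model (hypothetical names, not the Xen API) of the virtual-GIF
 * semantics in the excerpt above. */
struct vgif_state {
    bool gif;           /* virtual global interrupt flag */
    bool events_masked; /* stands in for local_event_delivery_{dis,en}able() */
};

static void vgif_clgi(struct vgif_state *s)
{
    s->gif = false;
    s->events_masked = true;   /* mask events for PV drivers */
}

static void vgif_stgi(struct vgif_state *s)
{
    s->gif = true;             /* keeps hvm_interrupt_blocked() consistent */
    s->events_masked = false;  /* unmask events for PV drivers */
}
```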


> 
>> BTW, VMX has a policy of accessing VMCS fields only when it is a must.
> 
> SVM, too. AFAICS, the difference is *how* the fields are accessed.
> 
> Christoph
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
