
Re: [Xen-devel] [PATCH 03/14] Nested Virtualization: data structure



On 17/08/2010 08:47, "Dong, Eddie" <eddie.dong@xxxxxxxxx> wrote:

>> --- a/xen/include/asm-x86/hvm/hvm.h
>> +++ b/xen/include/asm-x86/hvm/hvm.h
>> @@ -52,7 +52,8 @@ enum hvm_intblk {
>>      hvm_intblk_shadow,    /* MOV-SS or STI shadow */
>>      hvm_intblk_rflags_ie, /* RFLAGS.IE == 0 */
>>      hvm_intblk_tpr,       /* LAPIC TPR too high */
>> -    hvm_intblk_nmi_iret   /* NMI blocked until IRET */
>> +    hvm_intblk_nmi_iret,  /* NMI blocked until IRET */
>> +    hvm_intblk_gif,       /* GIF cleared */
>>  };
> 
> Should we move that to svm.h? We could add a wrapper sub-structure here if
> you want to accommodate both situations.

Arguably not worth it. I'd call it hvm_intblk_svm_gif or somesuch however.

>>  /* These happen to be the same as the VMX interrupt shadow definitions. */
>> @@ -180,6 +181,8 @@ int hvm_girq_dest_2_vcpu_id(struct domai
>>      (hvm_paging_enabled(v) && ((v)->arch.hvm_vcpu.guest_cr[4] & X86_CR4_PAE))
>>  #define hvm_nx_enabled(v) \
>>      (!!((v)->arch.hvm_vcpu.guest_efer & EFER_NX))
>> +#define hvm_svm_enabled(v) \
>> +    (!!((v)->arch.hvm_vcpu.guest_efer & EFER_SVME))
>> 
> 
> ditto.

Again arguably not worth it. Although it is easy to move...

>> +struct nestedhvm {
>> +    bool_t nh_gif; /* vcpu's GIF, always true on VMX */
>> +    bool_t nh_guestmode; /* vcpu in guestmode? */
>> +    void *nh_vm; /* VMCB/VMCS */
>> +    size_t nh_vmsize; /* size of VMCB/VMCS */
>> +
>> +    /* guest vm address of 1st level guest, needed for VMEXIT */
>> +    uint64_t nh_vmaddr;
>> +    uint64_t nh_vmmaxaddr; /* Maximum supported address */
>> +    void *nh_hostsave;
>> +
>> +    void *nh_arch; /* SVM/VMX specific data */
>> +    size_t nh_arch_size;
>> +
>> +    /* Cached real MSR permission bitmaps of the nested guest */
>> +    unsigned long *nh_cached_msrpm;
>> +    size_t nh_cached_msrpm_size;
>> +    /* Merged MSR permission bitmap */
>> +    unsigned long *nh_merged_msrpm;
>> +    size_t nh_merged_msrpm_size;
>> +
>> +    /* Cached cr3/hcr3 the guest sets up for the nested guest.
>> +     * Used by Shadow-on-Shadow and Nested-on-Nested. */
>> +    uint64_t nh_vmcb_cr3, nh_vmcb_hcr3;
>> +    uint32_t nh_guest_asid;
>> +    bool_t nh_flushp2m;
>> +    struct p2m_domain *nh_p2m; /* used p2m table for this vcpu */
>> +
>> +    /* Only meaningful when forcevmexit flag is set */
>> +    struct {
>> +        uint64_t exitcode;  /* generic exitcode */
>> +        uint64_t exitinfo1; /* additional information to the exitcode */
>> +        uint64_t exitinfo2; /* additional information to the exitcode */
>> +    } nh_forcevmexit;
>> +    union {
>> +        uint32_t bytes;
>> +        struct {
>> +            uint32_t rflagsif : 1;
>> +            uint32_t vintrmask : 1; /* always cleared on VMX */
>> +            uint32_t forcevmexit : 1;
>> +            uint32_t vmentry : 1;   /* true during vmentry/vmexit emulation */
>> +            uint32_t reserved : 28;
>> +        } fields;
>> +    } nh_hostflags;
>> +
>> +    bool_t nh_hap_enabled;
>> +};
> 
> ditto.
> We can split above structure into common part, and SVM/VMX specific
> sub-structure.

Yes, we should be strict on the layout of this structure. SVM/VMX-specific
stuff goes into a sub-structure in a union. Absolutely. And you would only
go peeking at the SVM sub-structure if hvm_svm_enabled(v)==TRUE. And we'd
have a similar predicate hvm_vmx_enabled(v)==TRUE, presumably. And maybe a
generic hvm_nestedvirt_enabled(v) too.

 -- Keir


>> +
>> +#define VCPU_HVM(v)       ((v)->arch.hvm_vcpu)
>> +#define VCPU_NESTEDHVM(v) (VCPU_HVM((v)).nestedhvm)
>> +
>>  struct hvm_vcpu {
>>      /* Guest control-register and EFER values, just as the guest
>>      sees them. */ unsigned long       guest_cr[5];
>> @@ -86,6 +137,8 @@ struct hvm_vcpu {
>> 
>>      struct tasklet      assert_evtchn_irq_tasklet;
>> 
>> +    struct nestedhvm    nestedhvm;
>> +
>>      struct mtrr_state   mtrr;
>>      u64                 pat_cr;
> 
> Thx, Eddie
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel


