
[Xen-changelog] [xen-4.1-testing] svm: implement instruction fetch part of DecodeAssist (on #PF/#NPF)



# HG changeset patch
# User Andre Przywara <andre.przywara@xxxxxxx>
# Date 1334217962 -3600
# Node ID 0aa6bc8f38a9a270f910f37e664ea6fcbece0073
# Parent  80130491806f42cfe9c8b93b755c3852ae55733d
svm: implement instruction fetch part of DecodeAssist (on #PF/#NPF)

Newer SVM implementations (Bulldozer) copy up to 15 bytes from the
instruction stream into the VMCB when a #PF or #NPF exception is
intercepted. This patch makes use of this information if available.
This saves us from a) traversing the guest's page tables, b) mapping
the guest's memory and c) copying the instructions from there into the
hypervisor's address space.
This speeds up #NPF intercepts quite a lot and avoids cache and TLB
thrashing.
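
A minimal sketch of the consumer side (using the names this patch
introduces; fetch_slow_path() is a hypothetical stand-in for the
existing walk/map/copy sequence):

    uint8_t buf[16];
    unsigned int len;

    /* Prefer bytes the hardware latched into the VMCB on #PF/#NPF;
     * hvm_get_insn_bytes() returns 0 when nothing was cached. */
    if ( (len = hvm_get_insn_bytes(curr, buf)) == 0 )
        len = fetch_slow_path(curr, buf); /* walk, map, copy */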

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
Signed-off-by: Keir Fraser <keir@xxxxxxx>
xen-unstable changeset:   23238:60f5df2afcbb
xen-unstable date:        Mon Apr 18 13:36:10 2011 +0100

svm: decode-assists feature must depend on nextrip feature.

...since the decode-assist fast paths assume the nextrip VMCB field is
valid.
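
Concretely this is a one-time mask at feature detection (see the
start_svm() hunk below):

    /* DecodeAssists fast paths assume nextrip is valid for fast rIP update. */
    if ( !cpu_has_svm_nrips )
        clear_bit(SVM_FEATURE_DECODEASSISTS, &svm_feature_flags);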

Signed-off-by: Keir Fraser <keir@xxxxxxx>
xen-unstable changeset:   23237:381ab77db71a
xen-unstable date:        Mon Apr 18 10:10:02 2011 +0100

svm: implement INVLPG part of DecodeAssist

Newer SVM implementations (Bulldozer) provide the desired address of
an INVLPG intercept explicitly in the EXITINFO1 field of the VMCB.
Use this address to avoid a costly instruction fetch and decode
cycle.
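
The fast path (see the VMEXIT_INVLPG hunk below) then reduces to:

    /* Hardware supplies the linear address: no fetch/decode needed. */
    svm_invlpg_intercept(vmcb->exitinfo1);
    /* nextrip (valid per the previous change) skips over the INVLPG. */
    __update_guest_eip(regs, vmcb->nextrip - vmcb->rip);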

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
xen-unstable changeset:   23236:e324c4d1dd6e
xen-unstable date:        Mon Apr 18 10:06:37 2011 +0100

svm: implement CR access part of DecodeAssist

Newer SVM implementations (Bulldozer) now report the general purpose
register used on a MOV-CR intercept explicitly. This avoids fetching
and decoding the instruction from the guest's memory and speeds up
some Windows guests, which exercise CR8 quite often.
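
How the exit is decoded (see svm_vmexit_do_cr_access() below): the exit
code encodes CR number and direction, while EXITINFO1 carries the GPR
operand, gated by bit 63 which signals that the assist data is valid:

    cr  = vmcb->exitcode - VMEXIT_CR0_READ; /* reads 0-15, writes 16-31 */
    dir = (cr > 15);                        /* non-zero == MOV to CRn */
    cr &= 0xf;
    gp  = vmcb->exitinfo1 & 0xf;            /* GPR operand from hardware */
    rc  = dir ? hvm_mov_to_cr(cr, gp) : hvm_mov_from_cr(cr, gp);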

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
Signed-off-by: Keir Fraser <keir@xxxxxxx>
xen-unstable changeset:   23235:2c8ad607ece1
xen-unstable date:        Mon Apr 18 10:01:06 2011 +0100

svm: add bit definitions for SVM DecodeAssist

Chapter 15.33 of recent APM Vol. 2 manuals describes additions to
SVM called DecodeAssist. Add the newly introduced fields to the VMCB
structure and name the associated CPUID bit.
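
The new fields are carved out of existing VMCB padding at offset 0xD0
(see the vmcb.h hunk below):

    u8  guest_ins_len;          /* valid byte count in bits 3:0 */
    u8  guest_ins[15];          /* up to 15 fetched instruction bytes */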

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
xen-unstable changeset:   23234:bf7afd48339a
xen-unstable date:        Mon Apr 18 09:49:13 2011 +0100

vmx/hvm: move mov-cr handling functions to generic HVM code

Currently the handling of CR access intercepts is done quite
differently in SVM and VMX. For future use, move the VMX part into
the generic HVM path and use the exported functions.
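
The resulting generic interface (see the support.h hunk below) returns
X86EMUL_* codes, so each vendor's exit handler decides whether to
advance the guest rIP:

    int hvm_mov_to_cr(unsigned int cr, unsigned int gpr);
    int hvm_mov_from_cr(unsigned int cr, unsigned int gpr);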

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
Signed-off-by: Keir Fraser <keir@xxxxxxx>
xen-unstable changeset:   23233:1276926e3795
xen-unstable date:        Mon Apr 18 09:47:12 2011 +0100
---


diff -r 80130491806f -r 0aa6bc8f38a9 xen/arch/x86/hvm/emulate.c
--- a/xen/arch/x86/hvm/emulate.c        Wed Apr 11 19:41:14 2012 +0100
+++ b/xen/arch/x86/hvm/emulate.c        Thu Apr 12 09:06:02 2012 +0100
@@ -996,6 +996,8 @@ int hvm_emulate_one(
 
     hvmemul_ctxt->insn_buf_eip = regs->eip;
     hvmemul_ctxt->insn_buf_bytes =
+        hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf)
+        ? :
         (hvm_virtual_to_linear_addr(
             x86_seg_cs, &hvmemul_ctxt->seg_reg[x86_seg_cs],
             regs->eip, sizeof(hvmemul_ctxt->insn_buf),
diff -r 80130491806f -r 0aa6bc8f38a9 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c    Wed Apr 11 19:41:14 2012 +0100
+++ b/xen/arch/x86/hvm/hvm.c    Thu Apr 12 09:06:02 2012 +0100
@@ -1306,6 +1306,86 @@ static void hvm_set_uc_mode(struct vcpu 
         return hvm_funcs.set_uc_mode(v);
 }
 
+int hvm_mov_to_cr(unsigned int cr, unsigned int gpr)
+{
+    struct vcpu *curr = current;
+    unsigned long val, *reg;
+
+    if ( (reg = get_x86_gpr(guest_cpu_user_regs(), gpr)) == NULL )
+    {
+        gdprintk(XENLOG_ERR, "invalid gpr: %u\n", gpr);
+        goto exit_and_crash;
+    }
+
+    val = *reg;
+    HVMTRACE_LONG_2D(CR_WRITE, cr, TRC_PAR_LONG(val));
+    HVM_DBG_LOG(DBG_LEVEL_1, "CR%u, value = %lx", cr, val);
+
+    switch ( cr )
+    {
+    case 0:
+        return hvm_set_cr0(val);
+
+    case 3:
+        return hvm_set_cr3(val);
+
+    case 4:
+        return hvm_set_cr4(val);
+
+    case 8:
+        vlapic_set_reg(vcpu_vlapic(curr), APIC_TASKPRI, ((val & 0x0f) << 4));
+        break;
+
+    default:
+        gdprintk(XENLOG_ERR, "invalid cr: %d\n", cr);
+        goto exit_and_crash;
+    }
+
+    return X86EMUL_OKAY;
+
+ exit_and_crash:
+    domain_crash(curr->domain);
+    return X86EMUL_UNHANDLEABLE;
+}
+
+int hvm_mov_from_cr(unsigned int cr, unsigned int gpr)
+{
+    struct vcpu *curr = current;
+    unsigned long val = 0, *reg;
+
+    if ( (reg = get_x86_gpr(guest_cpu_user_regs(), gpr)) == NULL )
+    {
+        gdprintk(XENLOG_ERR, "invalid gpr: %u\n", gpr);
+        goto exit_and_crash;
+    }
+
+    switch ( cr )
+    {
+    case 0:
+    case 2:
+    case 3:
+    case 4:
+        val = curr->arch.hvm_vcpu.guest_cr[cr];
+        break;
+    case 8:
+        val = (vlapic_get_reg(vcpu_vlapic(curr), APIC_TASKPRI) & 0xf0) >> 4;
+        break;
+    default:
+        gdprintk(XENLOG_ERR, "invalid cr: %u\n", cr);
+        goto exit_and_crash;
+    }
+
+    *reg = val;
+    HVMTRACE_LONG_2D(CR_READ, cr, TRC_PAR_LONG(val));
+    HVM_DBG_LOG(DBG_LEVEL_VMMU, "CR%u, value = %lx", cr, val);
+
+    return X86EMUL_OKAY;
+
+ exit_and_crash:
+    domain_crash(curr->domain);
+    return X86EMUL_UNHANDLEABLE;
+}
+
 int hvm_set_cr0(unsigned long value)
 {
     struct vcpu *v = current;
diff -r 80130491806f -r 0aa6bc8f38a9 xen/arch/x86/hvm/svm/svm.c
--- a/xen/arch/x86/hvm/svm/svm.c        Wed Apr 11 19:41:14 2012 +0100
+++ b/xen/arch/x86/hvm/svm/svm.c        Thu Apr 12 09:06:02 2012 +0100
@@ -603,6 +603,21 @@ static void svm_set_rdtsc_exiting(struct
     vmcb_set_general1_intercepts(vmcb, general1_intercepts);
 }
 
+static unsigned int svm_get_insn_bytes(struct vcpu *v, uint8_t *buf)
+{
+    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    unsigned int len = v->arch.hvm_svm.cached_insn_len;
+
+    if ( len != 0 )
+    {
+        /* Latch and clear the cached instruction. */
+        memcpy(buf, vmcb->guest_ins, 15);
+        v->arch.hvm_svm.cached_insn_len = 0;
+    }
+
+    return len;
+}
+
 static void svm_init_hypercall_page(struct domain *d, void *hypercall_page)
 {
     char *p;
@@ -928,11 +943,16 @@ struct hvm_function_table * __init start
 
     printk("SVM: Supported advanced features:\n");
 
+    /* DecodeAssists fast paths assume nextrip is valid for fast rIP update. */
+    if ( !cpu_has_svm_nrips )
+        clear_bit(SVM_FEATURE_DECODEASSISTS, &svm_feature_flags);
+
 #define P(p,s) if ( p ) { printk(" - %s\n", s); printed = 1; }
     P(cpu_has_svm_npt, "Nested Page Tables (NPT)");
     P(cpu_has_svm_lbrv, "Last Branch Record (LBR) Virtualisation");
     P(cpu_has_svm_nrips, "Next-RIP Saved on #VMEXIT");
     P(cpu_has_svm_cleanbits, "VMCB Clean Bits");
+    P(cpu_has_svm_decode, "DecodeAssists");
     P(cpu_has_pause_filter, "Pause-Intercept Filter");
 #undef P
 
@@ -1034,6 +1054,22 @@ static void svm_vmexit_do_cpuid(struct c
     __update_guest_eip(regs, inst_len);
 }
 
+static void svm_vmexit_do_cr_access(
+    struct vmcb_struct *vmcb, struct cpu_user_regs *regs)
+{
+    int gp, cr, dir, rc;
+
+    cr = vmcb->exitcode - VMEXIT_CR0_READ;
+    dir = (cr > 15);
+    cr &= 0xf;
+    gp = vmcb->exitinfo1 & 0xf;
+
+    rc = dir ? hvm_mov_to_cr(cr, gp) : hvm_mov_from_cr(cr, gp);
+
+    if ( rc == X86EMUL_OKAY )
+        __update_guest_eip(regs, vmcb->nextrip - vmcb->rip);
+}
+
 static void svm_dr_access(struct vcpu *v, struct cpu_user_regs *regs)
 {
     HVMTRACE_0D(DR_WRITE);
@@ -1427,7 +1463,8 @@ static struct hvm_function_table __read_
     .msr_read_intercept   = svm_msr_read_intercept,
     .msr_write_intercept  = svm_msr_write_intercept,
     .invlpg_intercept     = svm_invlpg_intercept,
-    .set_rdtsc_exiting    = svm_set_rdtsc_exiting
+    .set_rdtsc_exiting    = svm_set_rdtsc_exiting,
+    .get_insn_bytes       = svm_get_insn_bytes,
 };
 
 asmlinkage void svm_vmexit_handler(struct cpu_user_regs *regs)
@@ -1533,7 +1570,12 @@ asmlinkage void svm_vmexit_handler(struc
                     (unsigned long)regs->ecx, (unsigned long)regs->edx,
                     (unsigned long)regs->esi, (unsigned long)regs->edi);
 
-        if ( paging_fault(va, regs) )
+        if ( cpu_has_svm_decode )
+            v->arch.hvm_svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
+        rc = paging_fault(va, regs);
+        v->arch.hvm_svm.cached_insn_len = 0;
+
+        if ( rc )
         {
             if ( trace_will_trace_event(TRC_SHADOW) )
                 break;
@@ -1615,12 +1657,29 @@ asmlinkage void svm_vmexit_handler(struc
             int dir = (vmcb->exitinfo1 & 1) ? IOREQ_READ : IOREQ_WRITE;
             if ( handle_pio(port, bytes, dir) )
                 __update_guest_eip(regs, vmcb->exitinfo2 - vmcb->rip);
-            break;
         }
-        /* fallthrough to emulation if a string instruction */
+        else if ( !handle_mmio() )
+            hvm_inject_exception(TRAP_gp_fault, 0, 0);
+        break;
+
     case VMEXIT_CR0_READ ... VMEXIT_CR15_READ:
     case VMEXIT_CR0_WRITE ... VMEXIT_CR15_WRITE:
+        if ( cpu_has_svm_decode && (vmcb->exitinfo1 & (1ULL << 63)) )
+            svm_vmexit_do_cr_access(vmcb, regs);
+        else if ( !handle_mmio() ) 
+            hvm_inject_exception(TRAP_gp_fault, 0, 0);
+        break;
+
     case VMEXIT_INVLPG:
+        if ( cpu_has_svm_decode )
+        {
+            svm_invlpg_intercept(vmcb->exitinfo1);
+            __update_guest_eip(regs, vmcb->nextrip - vmcb->rip);
+        }
+        else if ( !handle_mmio() )
+            hvm_inject_exception(TRAP_gp_fault, 0, 0);
+        break;
+
     case VMEXIT_INVLPGA:
         if ( !handle_mmio() )
             hvm_inject_exception(TRAP_gp_fault, 0, 0);
@@ -1680,7 +1739,10 @@ asmlinkage void svm_vmexit_handler(struc
     case VMEXIT_NPF:
         perfc_incra(svmexits, VMEXIT_NPF_PERFC);
         regs->error_code = vmcb->exitinfo1;
+        if ( cpu_has_svm_decode )
+            v->arch.hvm_svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
         svm_do_nested_pgfault(vmcb->exitinfo2);
+        v->arch.hvm_svm.cached_insn_len = 0;
         break;
 
     case VMEXIT_IRET: {
diff -r 80130491806f -r 0aa6bc8f38a9 xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c        Wed Apr 11 19:41:14 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/vmx.c        Thu Apr 12 09:06:02 2012 +0100
@@ -1545,182 +1545,42 @@ static void vmx_invlpg_intercept(unsigne
         vpid_sync_vcpu_gva(curr, vaddr);
 }
 
-#define CASE_SET_REG(REG, reg)      \
-    case VMX_CONTROL_REG_ACCESS_GPR_ ## REG: regs->reg = value; break
-#define CASE_GET_REG(REG, reg)      \
-    case VMX_CONTROL_REG_ACCESS_GPR_ ## REG: value = regs->reg; break
+static int vmx_cr_access(unsigned long exit_qualification)
+{
+    struct vcpu *curr = current;
 
-#define CASE_EXTEND_SET_REG         \
-    CASE_EXTEND_REG(S)
-#define CASE_EXTEND_GET_REG         \
-    CASE_EXTEND_REG(G)
-
-#ifdef __i386__
-#define CASE_EXTEND_REG(T)
-#else
-#define CASE_EXTEND_REG(T)          \
-    CASE_ ## T ## ET_REG(R8, r8);   \
-    CASE_ ## T ## ET_REG(R9, r9);   \
-    CASE_ ## T ## ET_REG(R10, r10); \
-    CASE_ ## T ## ET_REG(R11, r11); \
-    CASE_ ## T ## ET_REG(R12, r12); \
-    CASE_ ## T ## ET_REG(R13, r13); \
-    CASE_ ## T ## ET_REG(R14, r14); \
-    CASE_ ## T ## ET_REG(R15, r15)
-#endif
-
-static int mov_to_cr(int gp, int cr, struct cpu_user_regs *regs)
-{
-    unsigned long value;
-    struct vcpu *v = current;
-    struct vlapic *vlapic = vcpu_vlapic(v);
-    int rc = 0;
-    unsigned long old;
-
-    switch ( gp )
+    switch ( VMX_CONTROL_REG_ACCESS_TYPE(exit_qualification) )
     {
-    CASE_GET_REG(EAX, eax);
-    CASE_GET_REG(ECX, ecx);
-    CASE_GET_REG(EDX, edx);
-    CASE_GET_REG(EBX, ebx);
-    CASE_GET_REG(EBP, ebp);
-    CASE_GET_REG(ESI, esi);
-    CASE_GET_REG(EDI, edi);
-    CASE_GET_REG(ESP, esp);
-    CASE_EXTEND_GET_REG;
-    default:
-        gdprintk(XENLOG_ERR, "invalid gp: %d\n", gp);
-        goto exit_and_crash;
+    case VMX_CONTROL_REG_ACCESS_TYPE_MOV_TO_CR: {
+        unsigned long gp = VMX_CONTROL_REG_ACCESS_GPR(exit_qualification);
+        unsigned long cr = VMX_CONTROL_REG_ACCESS_NUM(exit_qualification);
+        return hvm_mov_to_cr(cr, gp);
     }
-
-    HVMTRACE_LONG_2D(CR_WRITE, cr, TRC_PAR_LONG(value));
-
-    HVM_DBG_LOG(DBG_LEVEL_1, "CR%d, value = %lx", cr, value);
-
-    switch ( cr )
-    {
-    case 0:
-        old = v->arch.hvm_vcpu.guest_cr[0];
-        rc = !hvm_set_cr0(value);
-        if (rc)
-            hvm_memory_event_cr0(value, old);
-        return rc;
-
-    case 3:
-        old = v->arch.hvm_vcpu.guest_cr[3];
-        rc = !hvm_set_cr3(value);
-        if (rc)
-            hvm_memory_event_cr3(value, old);        
-        return rc;
-
-    case 4:
-        old = v->arch.hvm_vcpu.guest_cr[4];
-        rc = !hvm_set_cr4(value);
-        if (rc)
-            hvm_memory_event_cr4(value, old);
-        return rc; 
-
-    case 8:
-        vlapic_set_reg(vlapic, APIC_TASKPRI, ((value & 0x0F) << 4));
-        break;
-
-    default:
-        gdprintk(XENLOG_ERR, "invalid cr: %d\n", cr);
-        goto exit_and_crash;
+    case VMX_CONTROL_REG_ACCESS_TYPE_MOV_FROM_CR: {
+        unsigned long gp = VMX_CONTROL_REG_ACCESS_GPR(exit_qualification);
+        unsigned long cr = VMX_CONTROL_REG_ACCESS_NUM(exit_qualification);
+        return hvm_mov_from_cr(cr, gp);
     }
-
-    return 1;
-
- exit_and_crash:
-    domain_crash(v->domain);
-    return 0;
-}
-
-/*
- * Read from control registers. CR0 and CR4 are read from the shadow.
- */
-static void mov_from_cr(int cr, int gp, struct cpu_user_regs *regs)
-{
-    unsigned long value = 0;
-    struct vcpu *v = current;
-    struct vlapic *vlapic = vcpu_vlapic(v);
-
-    switch ( cr )
-    {
-    case 3:
-        value = (unsigned long)v->arch.hvm_vcpu.guest_cr[3];
-        break;
-    case 8:
-        value = (unsigned long)vlapic_get_reg(vlapic, APIC_TASKPRI);
-        value = (value & 0xF0) >> 4;
-        break;
-    default:
-        gdprintk(XENLOG_ERR, "invalid cr: %d\n", cr);
-        domain_crash(v->domain);
-        break;
-    }
-
-    switch ( gp ) {
-    CASE_SET_REG(EAX, eax);
-    CASE_SET_REG(ECX, ecx);
-    CASE_SET_REG(EDX, edx);
-    CASE_SET_REG(EBX, ebx);
-    CASE_SET_REG(EBP, ebp);
-    CASE_SET_REG(ESI, esi);
-    CASE_SET_REG(EDI, edi);
-    CASE_SET_REG(ESP, esp);
-    CASE_EXTEND_SET_REG;
-    default:
-        printk("invalid gp: %d\n", gp);
-        domain_crash(v->domain);
-        break;
-    }
-
-    HVMTRACE_LONG_2D(CR_READ, cr, TRC_PAR_LONG(value));
-
-    HVM_DBG_LOG(DBG_LEVEL_VMMU, "CR%d, value = %lx", cr, value);
-}
-
-static int vmx_cr_access(unsigned long exit_qualification,
-                         struct cpu_user_regs *regs)
-{
-    unsigned int gp, cr;
-    unsigned long value;
-    struct vcpu *v = current;
-
-    switch ( exit_qualification & VMX_CONTROL_REG_ACCESS_TYPE )
-    {
-    case VMX_CONTROL_REG_ACCESS_TYPE_MOV_TO_CR:
-        gp = exit_qualification & VMX_CONTROL_REG_ACCESS_GPR;
-        cr = exit_qualification & VMX_CONTROL_REG_ACCESS_NUM;
-        return mov_to_cr(gp, cr, regs);
-    case VMX_CONTROL_REG_ACCESS_TYPE_MOV_FROM_CR:
-        gp = exit_qualification & VMX_CONTROL_REG_ACCESS_GPR;
-        cr = exit_qualification & VMX_CONTROL_REG_ACCESS_NUM;
-        mov_from_cr(cr, gp, regs);
-        break;
-    case VMX_CONTROL_REG_ACCESS_TYPE_CLTS: 
-    {
-        unsigned long old = v->arch.hvm_vcpu.guest_cr[0];
-        v->arch.hvm_vcpu.guest_cr[0] &= ~X86_CR0_TS;
-        vmx_update_guest_cr(v, 0);
-
-        hvm_memory_event_cr0(v->arch.hvm_vcpu.guest_cr[0], old);
-
+    case VMX_CONTROL_REG_ACCESS_TYPE_CLTS: {
+        unsigned long old = curr->arch.hvm_vcpu.guest_cr[0];
+        curr->arch.hvm_vcpu.guest_cr[0] &= ~X86_CR0_TS;
+        vmx_update_guest_cr(curr, 0);
+        hvm_memory_event_cr0(curr->arch.hvm_vcpu.guest_cr[0], old);
         HVMTRACE_0D(CLTS);
         break;
     }
-    case VMX_CONTROL_REG_ACCESS_TYPE_LMSW:
-        value = v->arch.hvm_vcpu.guest_cr[0];
+    case VMX_CONTROL_REG_ACCESS_TYPE_LMSW: {
+        unsigned long value = curr->arch.hvm_vcpu.guest_cr[0];
         /* LMSW can: (1) set bits 0-3; (2) clear bits 1-3. */
         value = (value & ~0xe) | ((exit_qualification >> 16) & 0xf);
         HVMTRACE_LONG_1D(LMSW, value);
-        return !hvm_set_cr0(value);
+        return hvm_set_cr0(value);
+    }
     default:
         BUG();
     }
 
-    return 1;
+    return X86EMUL_OKAY;
 }
 
 static const struct lbr_info {
@@ -2525,7 +2385,7 @@ asmlinkage void vmx_vmexit_handler(struc
     case EXIT_REASON_CR_ACCESS:
     {
         exit_qualification = __vmread(EXIT_QUALIFICATION);
-        if ( vmx_cr_access(exit_qualification, regs) )
+        if ( vmx_cr_access(exit_qualification) == X86EMUL_OKAY )
             update_guest_eip(); /* Safe: MOV Cn, LMSW, CLTS */
         break;
     }
diff -r 80130491806f -r 0aa6bc8f38a9 xen/arch/x86/traps.c
--- a/xen/arch/x86/traps.c      Wed Apr 11 19:41:14 2012 +0100
+++ b/xen/arch/x86/traps.c      Thu Apr 12 09:06:02 2012 +0100
@@ -368,6 +368,36 @@ void vcpu_show_execution_state(struct vc
     vcpu_unpause(v);
 }
 
+unsigned long *get_x86_gpr(struct cpu_user_regs *regs, unsigned int modrm_reg)
+{
+    void *p;
+
+    switch ( modrm_reg )
+    {
+    case  0: p = &regs->eax; break;
+    case  1: p = &regs->ecx; break;
+    case  2: p = &regs->edx; break;
+    case  3: p = &regs->ebx; break;
+    case  4: p = &regs->esp; break;
+    case  5: p = &regs->ebp; break;
+    case  6: p = &regs->esi; break;
+    case  7: p = &regs->edi; break;
+#if defined(__x86_64__)
+    case  8: p = &regs->r8;  break;
+    case  9: p = &regs->r9;  break;
+    case 10: p = &regs->r10; break;
+    case 11: p = &regs->r11; break;
+    case 12: p = &regs->r12; break;
+    case 13: p = &regs->r13; break;
+    case 14: p = &regs->r14; break;
+    case 15: p = &regs->r15; break;
+#endif
+    default: p = NULL; break;
+    }
+
+    return p;
+}
+
 static char *trapstr(int trapnr)
 {
     static char *strings[] = { 
diff -r 80130491806f -r 0aa6bc8f38a9 xen/include/asm-x86/hvm/hvm.h
--- a/xen/include/asm-x86/hvm/hvm.h     Wed Apr 11 19:41:14 2012 +0100
+++ b/xen/include/asm-x86/hvm/hvm.h     Thu Apr 12 09:06:02 2012 +0100
@@ -132,6 +132,9 @@ struct hvm_function_table {
     int  (*cpu_up)(void);
     void (*cpu_down)(void);
 
+    /* Copy up to 15 bytes from cached instruction bytes at current rIP. */
+    unsigned int (*get_insn_bytes)(struct vcpu *v, uint8_t *buf);
+
     /* Instruction intercepts: non-void return values are X86EMUL codes. */
     void (*cpuid_intercept)(
         unsigned int *eax, unsigned int *ebx,
@@ -328,6 +331,11 @@ static inline void hvm_cpu_down(void)
         hvm_funcs.cpu_down();
 }
 
+static inline unsigned int hvm_get_insn_bytes(struct vcpu *v, uint8_t *buf)
+{
+    return (hvm_funcs.get_insn_bytes ? hvm_funcs.get_insn_bytes(v, buf) : 0);
+}
+
 enum hvm_task_switch_reason { TSW_jmp, TSW_iret, TSW_call_or_int };
 void hvm_task_switch(
     uint16_t tss_sel, enum hvm_task_switch_reason taskswitch_reason,
diff -r 80130491806f -r 0aa6bc8f38a9 xen/include/asm-x86/hvm/support.h
--- a/xen/include/asm-x86/hvm/support.h Wed Apr 11 19:41:14 2012 +0100
+++ b/xen/include/asm-x86/hvm/support.h Thu Apr 12 09:06:02 2012 +0100
@@ -137,5 +137,7 @@ int hvm_set_cr3(unsigned long value);
 int hvm_set_cr4(unsigned long value);
 int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content);
 int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content);
+int hvm_mov_to_cr(unsigned int cr, unsigned int gpr);
+int hvm_mov_from_cr(unsigned int cr, unsigned int gpr);
 
 #endif /* __ASM_X86_HVM_SUPPORT_H__ */
diff -r 80130491806f -r 0aa6bc8f38a9 xen/include/asm-x86/hvm/svm/svm.h
--- a/xen/include/asm-x86/hvm/svm/svm.h Wed Apr 11 19:41:14 2012 +0100
+++ b/xen/include/asm-x86/hvm/svm/svm.h Thu Apr 12 09:06:02 2012 +0100
@@ -80,6 +80,7 @@ extern u32 svm_feature_flags;
 #define cpu_has_svm_svml      cpu_has_svm_feature(SVM_FEATURE_SVML)
 #define cpu_has_svm_nrips     cpu_has_svm_feature(SVM_FEATURE_NRIPS)
 #define cpu_has_svm_cleanbits cpu_has_svm_feature(SVM_FEATURE_VMCBCLEAN)
+#define cpu_has_svm_decode    cpu_has_svm_feature(SVM_FEATURE_DECODEASSISTS)
 #define cpu_has_pause_filter  cpu_has_svm_feature(SVM_FEATURE_PAUSEFILTER)
 
 #endif /* __ASM_X86_HVM_SVM_H__ */
diff -r 80130491806f -r 0aa6bc8f38a9 xen/include/asm-x86/hvm/svm/vmcb.h
--- a/xen/include/asm-x86/hvm/svm/vmcb.h        Wed Apr 11 19:41:14 2012 +0100
+++ b/xen/include/asm-x86/hvm/svm/vmcb.h        Thu Apr 12 09:06:02 2012 +0100
@@ -432,7 +432,9 @@ struct vmcb_struct {
     vmcbcleanbits_t cleanbits;  /* offset 0xC0 */
     u32 res09;                  /* offset 0xC4 */
     u64 nextrip;                /* offset 0xC8 */
-    u64 res10a[102];            /* offset 0xD0 pad to save area */
+    u8  guest_ins_len;          /* offset 0xD0 */
+    u8  guest_ins[15];          /* offset 0xD1 */
+    u64 res10a[100];            /* offset 0xE0 pad to save area */
 
     svm_segment_register_t es;  /* offset 1024 - cleanbit 8 */
     svm_segment_register_t cs;  /* cleanbit 8 */
@@ -496,6 +498,9 @@ struct arch_svm_struct {
     int    launch_core;
     bool_t vmcb_in_sync;    /* VMCB sync'ed with VMSAVE? */
 
+    /* VMCB has a cached instruction from #PF/#NPF Decode Assist? */
+    uint8_t cached_insn_len; /* Zero if no cached instruction. */
+
     /* Upper four bytes are undefined in the VMCB, therefore we can't
      * use the fields in the VMCB. Write a 64bit value and then read a 64bit
      * value is fine unless there's a VMRUN/VMEXIT in between which clears
diff -r 80130491806f -r 0aa6bc8f38a9 xen/include/asm-x86/hvm/vmx/vmx.h
--- a/xen/include/asm-x86/hvm/vmx/vmx.h Wed Apr 11 19:41:14 2012 +0100
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h Thu Apr 12 09:06:02 2012 +0100
@@ -144,31 +144,15 @@ void vmx_update_cpu_exec_control(struct 
  * Exit Qualifications for MOV for Control Register Access
  */
  /* 3:0 - control register number (CRn) */
-#define VMX_CONTROL_REG_ACCESS_NUM      0xf
+#define VMX_CONTROL_REG_ACCESS_NUM(eq)  ((eq) & 0xf)
  /* 5:4 - access type (CR write, CR read, CLTS, LMSW) */
-#define VMX_CONTROL_REG_ACCESS_TYPE     0x30
+#define VMX_CONTROL_REG_ACCESS_TYPE(eq) (((eq) >> 4) & 0x3)
+# define VMX_CONTROL_REG_ACCESS_TYPE_MOV_TO_CR   0
+# define VMX_CONTROL_REG_ACCESS_TYPE_MOV_FROM_CR 1
+# define VMX_CONTROL_REG_ACCESS_TYPE_CLTS        2
+# define VMX_CONTROL_REG_ACCESS_TYPE_LMSW        3
  /* 10:8 - general purpose register operand */
-#define VMX_CONTROL_REG_ACCESS_GPR      0xf00
-#define VMX_CONTROL_REG_ACCESS_TYPE_MOV_TO_CR   (0 << 4)
-#define VMX_CONTROL_REG_ACCESS_TYPE_MOV_FROM_CR (1 << 4)
-#define VMX_CONTROL_REG_ACCESS_TYPE_CLTS        (2 << 4)
-#define VMX_CONTROL_REG_ACCESS_TYPE_LMSW        (3 << 4)
-#define VMX_CONTROL_REG_ACCESS_GPR_EAX  (0 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_ECX  (1 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_EDX  (2 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_EBX  (3 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_ESP  (4 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_EBP  (5 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_ESI  (6 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_EDI  (7 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_R8   (8 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_R9   (9 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_R10  (10 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_R11  (11 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_R12  (12 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_R13  (13 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_R14  (14 << 8)
-#define VMX_CONTROL_REG_ACCESS_GPR_R15  (15 << 8)
+#define VMX_CONTROL_REG_ACCESS_GPR(eq)  (((eq) >> 8) & 0xf)
 
 /*
  * Access Rights
diff -r 80130491806f -r 0aa6bc8f38a9 xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h   Wed Apr 11 19:41:14 2012 +0100
+++ b/xen/include/asm-x86/processor.h   Thu Apr 12 09:06:02 2012 +0100
@@ -593,6 +593,8 @@ int wrmsr_hypervisor_regs(uint32_t idx, 
 int microcode_update(XEN_GUEST_HANDLE(const_void), unsigned long len);
 int microcode_resume_cpu(int cpu);
 
+unsigned long *get_x86_gpr(struct cpu_user_regs *regs, unsigned int modrm_reg);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_X86_PROCESSOR_H */
