[Xen-changelog] [xen master] x86/svm: add virtual VMLOAD/VMSAVE support
commit 26f9a18485b5daf5215c8032a3049821c374b148
Author: Brian Woods <brian.woods@xxxxxxx>
AuthorDate: Tue Oct 31 17:03:08 2017 -0500
Commit: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CommitDate: Fri Dec 1 19:03:27 2017 +0000
x86/svm: add virtual VMLOAD/VMSAVE support
On AMD family 17h server processors, there is a feature called virtual
VMLOAD/VMSAVE. This allows a nested hypervisor to perform a VMLOAD or
VMSAVE without needing to be intercepted by the host hypervisor.
Virtual VMLOAD/VMSAVE requires the host hypervisor to be in long mode
and nested page tables to be enabled. For more information about it
please see:
AMD64 Architecture Programmer's Manual Volume 2: System Programming
http://support.amd.com/TechDocs/24593.pdf
Section: VMSAVE and VMLOAD Virtualization (Section 15.33.1)
This patch series adds support to check for and enable the virtual
VMLOAD/VMSAVE feature if available.
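[Editorial aside, not part of the patch: detection of this feature comes down
to a CPUID check. Below is a minimal sketch in C, assuming the feature is
reported in CPUID leaf 0x8000000A, EDX bit 15 (V_VMSAVE_VMLOAD in the AMD
APM); the helper names are hypothetical, not Xen's.]

    /* Sketch only: probe for virtual VMLOAD/VMSAVE support.
     * Assumes CPUID Fn8000_000Ah reports SVM features in EDX and that
     * bit 15 (V_VMSAVE_VMLOAD) flags the feature, per the AMD APM. */
    #include <stdbool.h>
    #include <stdint.h>

    static inline uint32_t svm_feature_edx(void)
    {
        uint32_t eax, ebx, ecx, edx;

        asm volatile ( "cpuid"
                       : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
                       : "0" (0x8000000a) );
        return edx;
    }

    static bool has_virt_vmload_vmsave(void)
    {
        return svm_feature_edx() & (1u << 15); /* V_VMSAVE_VMLOAD */
    }

[In the patch itself this check is wrapped in the cpu_has_svm_vloadsave
predicate, and the feature is only enabled on the nested-paging path of
construct_vmcb, matching the requirements stated above.]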
Signed-off-by: Brian Woods <brian.woods@xxxxxxx>
Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
---
xen/arch/x86/hvm/svm/svm.c | 1 +
xen/arch/x86/hvm/svm/svmdebug.c | 2 ++
xen/arch/x86/hvm/svm/vmcb.c | 8 ++++++++
3 files changed, 11 insertions(+)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index c8ffb17..60b1288 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1669,6 +1669,7 @@ const struct hvm_function_table * __init start_svm(void)
     P(cpu_has_svm_nrips, "Next-RIP Saved on #VMEXIT");
     P(cpu_has_svm_cleanbits, "VMCB Clean Bits");
     P(cpu_has_svm_decode, "DecodeAssists");
+    P(cpu_has_svm_vloadsave, "Virtual VMLOAD/VMSAVE");
     P(cpu_has_pause_filter, "Pause-Intercept Filter");
     P(cpu_has_tsc_ratio, "TSC Rate MSR");
 #undef P
diff --git a/xen/arch/x86/hvm/svm/svmdebug.c b/xen/arch/x86/hvm/svm/svmdebug.c
index 89ef2db..091c58f 100644
--- a/xen/arch/x86/hvm/svm/svmdebug.c
+++ b/xen/arch/x86/hvm/svm/svmdebug.c
@@ -55,6 +55,8 @@ void svm_vmcb_dump(const char *from, const struct vmcb_struct *vmcb)
            vmcb->exitinfo1, vmcb->exitinfo2);
     printk("np_enable = %#"PRIx64" guest_asid = %#x\n",
            vmcb_get_np_enable(vmcb), vmcb_get_guest_asid(vmcb));
+    printk("virtual vmload/vmsave = %d, virt_ext = %#"PRIx64"\n",
+           vmcb->virt_ext.fields.vloadsave_enable, vmcb->virt_ext.bytes);
     printk("cpl = %d efer = %#"PRIx64" star = %#"PRIx64" lstar = %#"PRIx64"\n",
            vmcb_get_cpl(vmcb), vmcb_get_efer(vmcb), vmcb->star, vmcb->lstar);
     printk("CR0 = 0x%016"PRIx64" CR2 = 0x%016"PRIx64"\n",
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index 997e759..2e48fdd 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -200,6 +200,14 @@ static int construct_vmcb(struct vcpu *v)
 
         /* PAT is under complete control of SVM when using nested paging. */
         svm_disable_intercept_for_msr(v, MSR_IA32_CR_PAT);
+
+        /* Use virtual VMLOAD/VMSAVE if available. */
+        if ( cpu_has_svm_vloadsave )
+        {
+            vmcb->virt_ext.fields.vloadsave_enable = 1;
+            vmcb->_general2_intercepts &= ~GENERAL2_INTERCEPT_VMLOAD;
+            vmcb->_general2_intercepts &= ~GENERAL2_INTERCEPT_VMSAVE;
+        }
     }
     else
     {
--
generated by git-patchbot for /home/xen/git/xen.git#master