
Re: [Xen-devel] [RFC PATCH 0/9] Introduce AMD SVM AVIC



On 09/19/2016 01:52 AM, Suravee Suthikulpanit wrote:
> GITHUB
> ======
> Latest git tree can be found at:
>     http://github.com/ssuthiku/xen.git    xen_avic_part1_v1
>
> OVERVIEW
> ========
> This patch set is the first of a two-part patch series to introduce
> the new AMD Advanced Virtual Interrupt Controller (AVIC) support.
>
> Basically, SVM AVIC hardware virtualizes local APIC registers of each
> vCPU via the virtual APIC (vAPIC) backing page. This allows guest access
> to certain APIC registers without the need to emulate the hardware behavior
> in the hypervisor. More information about AVIC can be found in the
> AMD64 Architecture Programmer’s Manual Volume 2 - System Programming.
>
>   http://support.amd.com/TechDocs/24593.pdf
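>
> As a rough illustration (a sketch, not code from this series): the
> backing page is a 4KB per-vCPU page laid out like the xAPIC MMIO
> register window, so hardware can satisfy accesses such as a TPR read
> directly from it:
>
>     #include <stdint.h>
>
>     /* The Task Priority Register sits at its architectural xAPIC
>      * offset (0x80) within the per-vCPU backing page. */
>     #define APIC_TASKPRI 0x80
>
>     static uint32_t vapic_read_tpr(const uint8_t *backing_page)
>     {
>         return *(const volatile uint32_t *)(backing_page + APIC_TASKPRI);
>     }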
>
> For SVM AVIC, we extend the existing Xen SVM code to do the following
> (a minimal sketch follows the list):
>   * Check CPUID to detect AVIC support in the processor
>   * Program new fields in VMCB to enable AVIC
>   * Introduce new AVIC data structures and add code to manage them
>   * Handle two new AVIC #VMEXITs
>   * Add new interrupt injection code using vAPIC backing page
>     instead of the existing V_IRQ, V_INTR_PRIO, V_INTR_VECTOR,
>     and V_IGN_TPR fields
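>
> A minimal sketch of the first two items above (assuming a cpuid()
> helper; the field names follow the public VMCB layout as used by
> Linux's <asm/svm.h>, while Xen's structure naming differs):
>
>     #include <stdbool.h>
>     #include <stdint.h>
>
>     #define CPUID_SVM_FEATURES  0x8000000a  /* SVM feature leaf */
>     #define CPUID_EDX_AVIC      (1u << 13)  /* AVIC supported */
>     #define V_AVIC_ENABLE       (1u << 31)  /* AVIC enable, VMCB int_ctl */
>
>     /* Abbreviated VMCB control-area fields relevant to AVIC; the
>      * offsets in comments refer to their position in the real VMCB. */
>     struct vmcb_control {
>         uint64_t avic_vapic_bar;     /* 0x040: APIC base to virtualize */
>         uint32_t int_ctl;            /* 0x060: holds V_AVIC_ENABLE */
>         uint64_t avic_backing_page;  /* 0x0e0: per-vCPU vAPIC page PA */
>         uint64_t avic_logical_id;    /* 0x0f0: logical ID table PA */
>         uint64_t avic_physical_id;   /* 0x0f8: physical ID table PA */
>     };
>
>     static bool avic_supported(void)
>     {
>         uint32_t eax, ebx, ecx, edx;
>
>         cpuid(CPUID_SVM_FEATURES, &eax, &ebx, &ecx, &edx);
>         return edx & CPUID_EDX_AVIC;
>     }
>
>     static void avic_enable(struct vmcb_control *c, uint64_t backing_pa,
>                             uint64_t log_tbl_pa, uint64_t phys_tbl_pa)
>     {
>         c->avic_backing_page = backing_pa;
>         c->avic_logical_id   = log_tbl_pa;
>         c->avic_physical_id  = phys_tbl_pa;
>         c->int_ctl          |= V_AVIC_ENABLE;
>     }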
>
> Currently, this patch series does not enable AVIC by default.
> Users can enable SVM AVIC by specifying the Xen boot parameter
> svm-avic=1 (an example follows).
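>
> For example, appended to the hypervisor line in the bootloader
> configuration (an illustrative GRUB2 entry; paths are system-specific):
>
>     multiboot2 /boot/xen.gz svm-avic=1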
>
> Later, in part 2, we will introduce the IOMMU AVIC support, which
> provides a speed-up for the PCI device pass-through use case by allowing
> the IOMMU hardware to inject interrupts directly into the guest via
> the vAPIC backing page.
>
> OVERALL PERFORMANCE
> ===================
> Currently, AVIC is available on the AMD family 15h models 6Xh
> (Carrizo) processors and newer. Here, a Carrizo system is used to
> collect the performance data shown below.
>
> Generally, SVM AVIC alone (w/o IOMMU AVIC) should provide an overall
> speed-up for HVM guests, since certain guest accesses to local APIC
> registers no longer require a #VMEXIT into the hypervisor for emulation.
>
> It should also improve performance when the hypervisor wants to inject
> interrupts into a running vCPU: it sets the corresponding IRR bit in
> the vAPIC backing page and triggers the AVIC_DOORBELL MSR.
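>
> In outline (a hedged sketch, not this series' code; 0xc001011b is the
> doorbell MSR number Linux calls MSR_AMD64_SVM_AVIC_DOORBELL, and
> wrmsrl() is an assumed helper):
>
>     #include <stdint.h>
>
>     #define APIC_IRR               0x200       /* first xAPIC IRR reg */
>     #define MSR_SVM_AVIC_DOORBELL  0xc001011b
>
>     /* Mark 'vector' pending in the target vCPU's backing page, then
>      * ring the doorbell on the physical core currently running it so
>      * hardware re-evaluates pending interrupts. Real code would use
>      * a locked RMW on the IRR word. */
>     static void avic_inject(uint8_t *backing_page, unsigned int vector,
>                             uint32_t host_apic_id)
>     {
>         volatile uint32_t *irr = (volatile uint32_t *)
>             (backing_page + APIC_IRR + 0x10 * (vector / 32));
>
>         *irr |= 1u << (vector % 32);
>         wrmsrl(MSR_SVM_AVIC_DOORBELL, host_apic_id);
>     }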
>
> For sending IPIs between running vCPUs in a Linux guest, Xen defaults
> to using event channels. However, for non-paravirtualized guests, AVIC
> can also provide performance improvements for sending IPIs.
>
> BENCHMARK 1: HACKBENCH
> ======================
>
> To measure IPI performance under a scheduling workload, I have collected
> some performance numbers on 2 and 3 vCPUs running hackbench with the
> following details:
>
>   hackbench -p -l 100000
>   Running in process mode with 10 groups using 40 file descriptors each (== 400 tasks)
>   Each sender will pass 100000 messages of 100 bytes
>
>     Configuration      |  2 vCPUs (sec) |  3 vCPUs (sec)
>   --------------------------------------------------------
>     No AVIC w/o evtchn |     299.57     |    337.779
>     No AVIC w/  evtchn |     270.37     |    419.6064
>        AVIC w/  evtchn |     181.46     |    171.7957
>        AVIC w/o evtchn |     171.81     |    169.0858
>
> Note: In the "w/o evtchn" cases, the Linux guest is built without
>       Xen guest support.

Enlightened Linux tries to avoid using event channels for APIC accesses
if XEN_HVM_CPUID_APIC_ACCESS_VIRT or XEN_HVM_CPUID_X2APIC_VIRT is set.

I didn't notice either of these two bits set in the series. Should they
be (probably the first one)? Or is this something you are planning for
the second part?
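
For reference, both flags live in EAX of the Xen HVM-specific feature
leaf (base + 4). A rough sketch of the guest-side check, assuming a
cpuid() helper and the standard Xen CPUID base (which may be shifted
in 0x100 steps when running under another hypervisor):

    #include <stdint.h>

    #define XEN_CPUID_BASE                  0x40000000
    #define XEN_HVM_CPUID_APIC_ACCESS_VIRT  (1u << 0)  /* virt. APIC regs */
    #define XEN_HVM_CPUID_X2APIC_VIRT       (1u << 1)  /* virt. x2APIC */

    static int xen_apic_virt_available(void)
    {
        uint32_t eax, ebx, ecx, edx;

        /* Leaf base+4 is the HVM-specific feature leaf; EAX holds
         * the feature flags. */
        cpuid(XEN_CPUID_BASE + 4, &eax, &ebx, &ecx, &edx);
        return !!(eax & (XEN_HVM_CPUID_APIC_ACCESS_VIRT |
                         XEN_HVM_CPUID_X2APIC_VIRT));
    }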

-boris


>
> CURRENT UNTESTED USE-CASES
> ==========================
>     - Nested VM
>
> Any feedback and comments are very much appreciated.
>
> Thank you,
> Suravee
>
> Suravee Suthikulpanit (9):
>   x86/HVM: Introduce struct hvm_pi_ops
>   x86/vLAPIC: Declare vlapic_read_aligned() and vlapic_reg_write() as
>     non-static
>   x86/HVM: Call vlapic_destroy after vcpu_destroy
>   x86/SVM: Modify VMCB fields to add AVIC support
>   x86/HVM/SVM: Add AVIC initialization code
>   x86/SVM: Add AVIC vmexit handlers
>   x86/SVM: Add vcpu scheduling support for AVIC
>   x86/SVM: Add interrupt management code via AVIC
>   x86/SVM: Hook up miscellaneous AVIC functions
>
>  xen/arch/x86/hvm/hvm.c             |   4 +-
>  xen/arch/x86/hvm/svm/Makefile      |   1 +
>  xen/arch/x86/hvm/svm/avic.c        | 609 +++++++++++++++++++++++++++++++++++++
>  xen/arch/x86/hvm/svm/intr.c        |   4 +
>  xen/arch/x86/hvm/svm/svm.c         |  57 +++-
>  xen/arch/x86/hvm/svm/vmcb.c        |   9 +-
>  xen/arch/x86/hvm/vlapic.c          |   5 +-
>  xen/arch/x86/hvm/vmx/vmx.c         |  32 +-
>  xen/include/asm-x86/hvm/domain.h   |  63 ++++
>  xen/include/asm-x86/hvm/hvm.h      |   4 +-
>  xen/include/asm-x86/hvm/svm/avic.h |  49 +++
>  xen/include/asm-x86/hvm/svm/svm.h  |   2 +
>  xen/include/asm-x86/hvm/svm/vmcb.h |  35 ++-
>  xen/include/asm-x86/hvm/vlapic.h   |   4 +
>  xen/include/asm-x86/hvm/vmx/vmcs.h |  59 ----
>  15 files changed, 843 insertions(+), 94 deletions(-)
>  create mode 100644 xen/arch/x86/hvm/svm/avic.c
>  create mode 100644 xen/include/asm-x86/hvm/svm/avic.h
>


