
Re: [PATCH v2 2/2] x86/svm: Use the virtual NMI when available


  • To: Abdelkareem Abdelsaamad <abdelkareem.abdelsaamad@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 13 Feb 2026 23:17:35 +0000
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, roger.pau@xxxxxxxxxx
  • Delivery-date: Fri, 13 Feb 2026 23:17:54 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 13/02/2026 10:44 pm, Abdelkareem Abdelsaamad wrote:
> With vNMI, the pending NMI is simply stuffed into the VMCB and handed off
> to the hardware. This means that Xen needs to be able to set a vNMI pending
> on-demand, and also query if a vNMI is pending, e.g. to honor the "at most
> one NMI pending" rule and to preserve all NMIs across save and restore.
>
> Introduce two new hvm_function_table callbacks to support SVM's vNMI,
> allowing the Xen hypervisor to query whether a vNMI is pending and to
> set the VMCB's _vintr pending flag so that NMIs are serviced by
> hardware if/when virtual NMIs become unblocked.
>
> Signed-off-by: Abdelkareem Abdelsaamad <abdelkareem.abdelsaamad@xxxxxxxxxx>
> ---
>  xen/arch/x86/hvm/svm/intr.c        | 13 ++++++++++--
>  xen/arch/x86/hvm/svm/svm.c         | 33 ++++++++++++++++++++++++++++--
>  xen/arch/x86/hvm/svm/vmcb.c        |  2 ++
>  xen/arch/x86/include/asm/hvm/hvm.h |  9 ++++++++
>  4 files changed, 53 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/svm/intr.c b/xen/arch/x86/hvm/svm/intr.c
> index 6453a46b85..bc52f8e189 100644
> --- a/xen/arch/x86/hvm/svm/intr.c
> +++ b/xen/arch/x86/hvm/svm/intr.c
> @@ -29,10 +29,19 @@
>  
>  static void svm_inject_nmi(struct vcpu *v)
>  {
> -    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
> -    u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
> +    struct vmcb_struct *vmcb;
> +    u32 general1_intercepts;
>      intinfo_t event;
>  
> +    if ( hvm_funcs.is_vnmi_enabled(v) )
> +    {
> +        hvm_funcs.set_vnmi_pending(v);
> +        return;
> +    }
> +
> +    vmcb = v->arch.hvm.svm.vmcb;
> +    general1_intercepts = vmcb_get_general1_intercepts(vmcb);
> +

There's no need to defer these assignments.

When the HVM hooks are deleted, the correct logic here is:

    if ( vmcb->_vintr.fields.vnmi_enable )
    {
        vmcb->_vintr.fields.vnmi_pending = true;
        return;
    }
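
i.e. with the assignments left where they are, the start of
svm_inject_nmi() would look something like this (sketch only; the rest
of the injection path is unchanged):

    static void svm_inject_nmi(struct vcpu *v)
    {
        struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
        u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
        intinfo_t event;

        /* With vNMI, hand the pending NMI straight to hardware. */
        if ( vmcb->_vintr.fields.vnmi_enable )
        {
            vmcb->_vintr.fields.vnmi_pending = true;
            return;
        }

        /* ... existing software injection path, unchanged ... */
    }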

> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 6e380890bd..00e5630025 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -545,7 +571,7 @@ static unsigned int cf_check svm_get_interrupt_shadow(struct vcpu *v)
>      struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
>      unsigned int intr_shadow = 0;
>  
> -    if ( vmcb->int_stat.intr_shadow )
> +    if ( vmcb->int_stat.intr_shadow || svm_is_vnmi_masked(v) )
>          intr_shadow |= HVM_INTR_SHADOW_MOV_SS | HVM_INTR_SHADOW_STI;
>  
>      if ( vmcb_get_general1_intercepts(vmcb) & GENERAL1_INTERCEPT_IRET )

It's only HVM_INTR_SHADOW_NMI which vNMI applies to, not the MOV-SS/STI
shadow.  Something like:

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 18ba837738c6..f5c7ea0b0dbe 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -548,7 +548,9 @@ static unsigned int cf_check svm_get_interrupt_shadow(struct vcpu *v)
     if ( vmcb->int_stat.intr_shadow )
         intr_shadow |= HVM_INTR_SHADOW_MOV_SS | HVM_INTR_SHADOW_STI;
 
-    if ( vmcb_get_general1_intercepts(vmcb) & GENERAL1_INTERCEPT_IRET )
+    if ( vmcb->_vintr.fields.vnmi_enable
+         ? vmcb->_vintr.fields.vnmi_blocked
+         : (vmcb_get_general1_intercepts(vmcb) & GENERAL1_INTERCEPT_IRET) )
         intr_shadow |= HVM_INTR_SHADOW_NMI;
 
     return intr_shadow;
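
For clarity, the whole function would then read along these lines
(sketch only; field names as used in this series):

    static unsigned int cf_check svm_get_interrupt_shadow(struct vcpu *v)
    {
        struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
        unsigned int intr_shadow = 0;

        if ( vmcb->int_stat.intr_shadow )
            intr_shadow |= HVM_INTR_SHADOW_MOV_SS | HVM_INTR_SHADOW_STI;

        /* With vNMI enabled, NMI blocking is tracked by hardware in
         * vnmi_blocked rather than via the IRET intercept. */
        if ( vmcb->_vintr.fields.vnmi_enable
             ? vmcb->_vintr.fields.vnmi_blocked
             : (vmcb_get_general1_intercepts(vmcb) & GENERAL1_INTERCEPT_IRET) )
            intr_shadow |= HVM_INTR_SHADOW_NMI;

        return intr_shadow;
    }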


~Andrew