
Re: [PATCH] x86/physdev: Call xsm_unmap_domain_irq earlier


  • To: Jason Andryuk <jandryuk@xxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 28 Mar 2022 13:40:45 +0200
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, "Daniel P. Smith" <dpsmith.dev@xxxxxxxxx>
  • Delivery-date: Mon, 28 Mar 2022 11:41:27 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, Mar 25, 2022 at 10:18:26AM -0400, Jason Andryuk wrote:
> Pull the XSM check up out of unmap_domain_pirq into physdev_unmap_pirq.
> 
> xsm_unmap_domain_irq was seen denying unmap_domain_pirq when called from
> complete_domain_destroy as an RCU callback.  The source context was an
> unexpected, random domain.  Since this is a Xen-internal operation,
> going through the XSM hook is inappropriate.
> 
> Move the XSM hook up into physdev_unmap_pirq, which is the
> guest-accessible path.  This requires moving some of the sanity checks
> upwards as well since the hook needs the additional data to make its
> decision.  Since complete_domain_destroy still calls unmap_domain_pirq,
> replace the moved runtime checks with assertions.  Only valid pirqs
> should make their way into unmap_domain_pirq from complete_domain_destroy.
> 
> This is mostly code movement, but one style change is to pull `irq =
> info->arch.irq` out of the if condition.
> 
> Label done is now unused and removed.
> 
> Signed-off-by: Jason Andryuk <jandryuk@xxxxxxxxx>
> ---
> unmap_domain_pirq is also called in vioapic_hwdom_map_gsi and
> vpci_msi_disable.  vioapic_hwdom_map_gsi is a cleanup path after going
> through map_domain_pirq, and I don't think the vpci code is directly
> guest-accessible.  So I think those are okay, but I am not familiar
> with that code.  Hence, I am highlighting them.

Hm, for vpci_msi_disable it's not technically correct to drop the XSM
check.  That is a guest-accessible path; however, the value of the PIRQ
is not settable by the guest, so it's kind of OK for that reason alone.

vioapic_hwdom_map_gsi is just for the hardware domain, so likely also
OK.
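
For reference, my reading of the other half of the change, as a rough
sketch only (the physdev.c hunk isn't quoted below, so the exact lookup
and error handling here are assumed rather than taken from the patch):

    /*
     * In physdev_unmap_pirq(), ahead of the unmap_domain_pirq() call:
     * the bounds check, pirq lookup and XSM hook are hoisted out of
     * unmap_domain_pirq() so they only run on the guest-reachable path.
     */
    if ( pirq < 0 || pirq >= d->nr_pirqs )
        ret = -EINVAL;
    else
    {
        const struct pirq *info = pirq_info(d, pirq);
        int irq = info ? info->arch.irq : 0;

        if ( irq <= 0 )
            ret = -EINVAL;
        else
        {
            const struct msi_desc *msi_desc = irq_to_desc(irq)->msi_desc;

            ret = xsm_unmap_domain_irq(XSM_HOOK, d, irq,
                                       msi_desc ? msi_desc->dev : NULL);
        }
    }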

>  xen/arch/x86/irq.c     | 31 +++++++-----------------------
>  xen/arch/x86/physdev.c | 43 +++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 49 insertions(+), 25 deletions(-)
> 
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index 285ac399fb..ddd3194fba 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -2310,41 +2310,25 @@ int unmap_domain_pirq(struct domain *d, int pirq)
>      struct pirq *info;
>      struct msi_desc *msi_desc = NULL;
>  
> -    if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
> -        return -EINVAL;
> -
> +    ASSERT(pirq >= 0);
> +    ASSERT(pirq < d->nr_pirqs);
>      ASSERT(pcidevs_locked());
>      ASSERT(spin_is_locked(&d->event_lock));
>  
>      info = pirq_info(d, pirq);
> -    if ( !info || (irq = info->arch.irq) <= 0 )
> -    {
> -        dprintk(XENLOG_G_ERR, "dom%d: pirq %d not mapped\n",
> -                d->domain_id, pirq);
> -        ret = -EINVAL;
> -        goto done;
> -    }
> +    ASSERT(info);
> +
> +    irq = info->arch.irq;
> +    ASSERT(irq > 0);
>  
>      desc = irq_to_desc(irq);
>      msi_desc = desc->msi_desc;
>      if ( msi_desc && msi_desc->msi_attrib.type == PCI_CAP_ID_MSI )
>      {
> -        if ( msi_desc->msi_attrib.entry_nr )
> -        {
> -            printk(XENLOG_G_ERR
> -                   "dom%d: trying to unmap secondary MSI pirq %d\n",
> -                   d->domain_id, pirq);
> -            ret = -EBUSY;
> -            goto done;
> -        }
> +        ASSERT(msi_desc->msi_attrib.entry_nr == 0);
>          nr = msi_desc->msi.nvec;
>      }
>  
> -    ret = xsm_unmap_domain_irq(XSM_HOOK, d, irq,
> -                               msi_desc ? msi_desc->dev : NULL);

Have you considered performing the check only if !d->is_dying?
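
Something like the following, as a sketch only (keeping the hook call
and the done label from the current code instead of removing them):

    /*
     * Keep the XSM check for guest-issued unmaps, but skip it during
     * domain teardown, where the RCU callback runs in an unrelated
     * domain's context.
     */
    if ( !d->is_dying )
    {
        ret = xsm_unmap_domain_irq(XSM_HOOK, d, irq,
                                   msi_desc ? msi_desc->dev : NULL);
        if ( ret )
            goto done;
    }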

Thanks, Roger.