[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v8 02/13] vpci: use per-domain PCI lock to protect vpci structure


  • To: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 26 Jul 2023 11:35:03 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Wed, 26 Jul 2023 09:35:23 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Jul 26, 2023 at 01:17:58AM +0000, Volodymyr Babchuk wrote:
> 
> Hi Roger,
> 
> Roger Pau Monné <roger.pau@xxxxxxxxxx> writes:
> 
> > On Thu, Jul 20, 2023 at 12:32:31AM +0000, Volodymyr Babchuk wrote:
> >> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
> >> @@ -498,6 +537,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, 
> >> unsigned int size,
> >>          ASSERT(data_offset < size);
> >>      }
> >>      spin_unlock(&pdev->vpci->lock);
> >> +    unlock_locks(d);
> >
> > There's one issue here, some handlers will call pcidevs_lock(), which
> > will result in a lock order inversion, as in the previous patch we
> > agreed that the locking order was pcidevs_lock first, d->pci_lock
> > after.
> >
> > For example the MSI control_write() handler will call
> > vpci_msi_arch_enable() which takes the pcidevs lock.  I think I will
> > have to look into using a dedicated lock for MSI related handling, as
> > that's the only place where I think we have this pattern of taking the
> > pcidevs_lock after the d->pci_lock.
> 
> I'll mention this in the commit message. Is there something else that I
> should do right now?

Well, I don't think we want to commit this as-is with a known lock
inversion.

The functions that require the pcidevs lock are:

pt_irq_{create,destroy}_bind()
unmap_domain_pirq()

AFAICT those functions require the lock only in order to assert that the
underlying device doesn't go away, since they also take d->event_lock
to get exclusive access to the data fields.  Please double-check that
I'm not mistaken.

If that's accurate, you will have to audit the call tree spawned from
those functions and modify the asserts to check that either the pcidevs
lock or the per-domain d->pci_lock is held.

Thanks, Roger.
