
Re: [PATCH v8 02/13] vpci: use per-domain PCI lock to protect vpci structure


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Date: Fri, 28 Jul 2023 00:21:54 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Fri, 28 Jul 2023 00:22:20 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v8 02/13] vpci: use per-domain PCI lock to protect vpci structure

Hi Roger,

Roger Pau Monné <roger.pau@xxxxxxxxxx> writes:

> On Thu, Jul 27, 2023 at 12:56:54AM +0000, Volodymyr Babchuk wrote:
>> Hi Roger,
>> 
>> Roger Pau Monné <roger.pau@xxxxxxxxxx> writes:
>> 
>> > On Wed, Jul 26, 2023 at 01:17:58AM +0000, Volodymyr Babchuk wrote:
>> >> 
>> >> Hi Roger,
>> >> 
>> >> Roger Pau Monné <roger.pau@xxxxxxxxxx> writes:
>> >> 
>> >> > On Thu, Jul 20, 2023 at 12:32:31AM +0000, Volodymyr Babchuk wrote:
>> >> >> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
>> >> >> @@ -498,6 +537,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
>> >> >>          ASSERT(data_offset < size);
>> >> >>      }
>> >> >>      spin_unlock(&pdev->vpci->lock);
>> >> >> +    unlock_locks(d);
>> >> >
>> >> > There's one issue here, some handlers will call pcidevs_lock(), which
>> >> > will result in a lock order inversion, as in the previous patch we
>> >> > agreed that the locking order was pcidevs_lock first, d->pci_lock
>> >> > after.
>> >> >
>> >> > For example the MSI control_write() handler will call
>> >> > vpci_msi_arch_enable() which takes the pcidevs lock.  I think I will
>> >> > have to look into using a dedicated lock for MSI related handling, as
>> >> > that's the only place where I think we have this pattern of taking the
>> >> > pcidevs_lock after the d->pci_lock.
>> >> 
>> >> I'll mention this in the commit message. Is there something else that I
>> >> should do right now?
>> >
>> > Well, I don't think we want to commit this as-is with a known lock
>> > inversion.
>> >
>> > The functions that require the pcidevs lock are:
>> >
>> > pt_irq_{create,destroy}_bind()
>> > unmap_domain_pirq()
>> >
>> > AFAICT those functions require the lock in order to assert that the
>> > underlying device doesn't go away, as they do also use d->event_lock
>> > in order to get exclusive access to the data fields.  Please double
>> > check that I'm not mistaken.
>> 
>> You are right, none of the three functions accesses any PCI state
>> directly. However...
>> 
>> > If that's accurate you will have to check the call tree that spawns
>> > from those functions in order to modify the asserts to check for
>> > either the pcidevs or the per-domain pci_list lock being taken.
>> 
>> ... I checked the calls for the PT_IRQ_TYPE_MSI case; there is only one
>> call that bothers me: hvm_pi_update_irte(), which calls IO-MMU code via
>> vmx_pi_update_irte():
>> 
>> amd_iommu_msi_msg_update_ire() or msi_msg_write_remap_rte().
>
> That path is only for VT-d, so strictly speaking you only need to worry
> about msi_msg_write_remap_rte().
>
> msi_msg_write_remap_rte() does take the IOMMU intremap lock.
>
> There are also existing callers of iommu_update_ire_from_msi() that
> call the functions without the pcidevs locked.  See
> hpet_msi_set_affinity() for example.

Thank you for clarifying this.

I have found another call path that causes trouble:
__pci_enable_msi[x]() is called from pci_enable_msi(), which in turn is
reached via vMSI, via physdev_map_pirq(), and also directly from the
ns16550 driver.
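
The chains, as I read the code (so take the exact intermediate
functions as my reading rather than verbatim quotes), look roughly
like this:

    /* vMSI path: d->pci_lock is already held by vpci_write(), and
     * pcidevs_lock gets taken further down -- the inversion you
     * mentioned above: */
    control_write() -> vpci_msi_arch_enable()
        -> allocate_and_map_msi_pirq() -> map_domain_pirq()
            -> pci_enable_msi() -> __pci_enable_msi()

    /* Hypercall path: pcidevs_lock is taken before map_domain_pirq(): */
    physdev_map_pirq() -> allocate_and_map_msi_pirq() -> map_domain_pirq()
        -> pci_enable_msi() -> __pci_enable_msi()

    /* Xen's own serial driver, no guest context at all: */
    ns16550_init_postirq() -> pci_enable_msi() -> __pci_enable_msi()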

__pci_enable_msi[x]() accesses pdev fields, mostly pdev->msix or
pdev->msi_list, so it looks like we need pcidevs_lock() there, as those
pdev fields are not protected by d->pci_lock...
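
If we go the route of relaxing the asserts that you suggested above, I
would expect the check in front of those accesses to end up as
something like this (just a sketch, assuming d->pci_lock stays an
rwlock as in this series):

    /* Before touching pdev->msix / pdev->msi_list in __pci_enable_msi[x](): */
    ASSERT(pcidevs_locked() ||
           (pdev->domain && rw_is_locked(&pdev->domain->pci_lock)));

with callers that have no vpci involvement (ns16550, physdev) simply
continuing to take pcidevs_lock().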

-- 
WBR, Volodymyr

 

