
Re: [PATCH v8 02/13] vpci: use per-domain PCI lock to protect vpci structure


  • To: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 27 Jul 2023 14:42:13 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Thu, 27 Jul 2023 12:42:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Jul 27, 2023 at 12:56:54AM +0000, Volodymyr Babchuk wrote:
> Hi Roger,
> 
> Roger Pau Monné <roger.pau@xxxxxxxxxx> writes:
> 
> > On Wed, Jul 26, 2023 at 01:17:58AM +0000, Volodymyr Babchuk wrote:
> >> 
> >> Hi Roger,
> >> 
> >> Roger Pau Monné <roger.pau@xxxxxxxxxx> writes:
> >> 
> >> > On Thu, Jul 20, 2023 at 12:32:31AM +0000, Volodymyr Babchuk wrote:
> >> >> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
> >> >> @@ -498,6 +537,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
> >> >>          ASSERT(data_offset < size);
> >> >>      }
> >> >>      spin_unlock(&pdev->vpci->lock);
> >> >> +    unlock_locks(d);
> >> >
> >> > There's one issue here, some handlers will call pcidevs_lock(), which
> >> > will result in a lock order inversion, as in the previous patch we
> >> > agreed that the locking order was pcidevs_lock first, d->pci_lock
> >> > after.
> >> >
> >> > For example, the MSI control_write() handler will call
> >> > vpci_msi_arch_enable() which takes the pcidevs lock.  I think I will
> >> > have to look into using a dedicated lock for MSI related handling, as
> >> > that's the only place where I think we have this pattern of taking the
> >> > pcidevs_lock after the d->pci_lock.
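
For illustration, the inversion described above reduces to the classic
ABBA pattern.  A minimal self-contained sketch, with pthread mutexes
standing in for pcidevs_lock() and d->pci_lock (the function bodies are
assumptions for illustration, not Xen code):

    #include <pthread.h>

    /* Stand-ins for the two locks discussed in this thread. */
    static pthread_mutex_t pcidevs_mtx = PTHREAD_MUTEX_INITIALIZER; /* pcidevs_lock() */
    static pthread_mutex_t pci_mtx     = PTHREAD_MUTEX_INITIALIZER; /* d->pci_lock    */

    /* Agreed nesting: pcidevs lock first, per-domain PCI lock second. */
    static void device_assign_path(void)
    {
        pthread_mutex_lock(&pcidevs_mtx);
        pthread_mutex_lock(&pci_mtx);
        /* ... update the domain's PCI state ... */
        pthread_mutex_unlock(&pci_mtx);
        pthread_mutex_unlock(&pcidevs_mtx);
    }

    /*
     * vpci_write() -> control_write() -> vpci_msi_arch_enable(): the
     * per-domain lock is already held when the pcidevs lock is taken,
     * i.e. the reverse (ABBA) order, which can deadlock against the
     * path above.
     */
    static void vpci_msi_enable_path(void)
    {
        pthread_mutex_lock(&pci_mtx);     /* taken by vpci_write()         */
        pthread_mutex_lock(&pcidevs_mtx); /* inside vpci_msi_arch_enable() */
        /* ... */
        pthread_mutex_unlock(&pcidevs_mtx);
        pthread_mutex_unlock(&pci_mtx);
    }

Breaking the cycle requires either keeping a single acquisition order
everywhere or, as suggested above, a dedicated lock for the MSI
handling.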
> >> 
> >> I'll mention this in the commit message. Is there something else that I
> >> should do right now?
> >
> > Well, I don't think we want to commit this as-is with a known lock
> > inversion.
> >
> > The functions that require the pcidevs lock are:
> >
> > pt_irq_{create,destroy}_bind()
> > unmap_domain_pirq()
> >
> > AFAICT those functions require the lock in order to ensure that the
> > underlying device doesn't go away, as they also use d->event_lock
> > to get exclusive access to the data fields.  Please double
> > check that I'm not mistaken.
> 
> You are right, all three functions do not access any PCI state
> directly. However...
> 
> > If that's accurate you will have to check the call tree that spawns
> > from those functions in order to modify the asserts to check for
> > either the pcidevs or the per-domain pci_list lock being taken.
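
A minimal sketch of the suggested assert adjustment (assuming
d->pci_lock is an rwlock; pcidevs_locked() and rw_is_locked() are
existing Xen helpers, but the exact form here is an assumption):

    /* Accept either the global pcidevs lock or the per-domain PCI lock. */
    ASSERT(pcidevs_locked() || rw_is_locked(&d->pci_lock));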
> 
> ... I checked the calls for the PT_IRQ_TYPE_MSI case; there is only one
> call that bothers me: hvm_pi_update_irte(), which calls IO-MMU code via
> vmx_pi_update_irte():
> 
> amd_iommu_msi_msg_update_ire() or msi_msg_write_remap_rte().

That path is only for VT-d, so strictly speaking you only need to worry
about msi_msg_write_remap_rte().

msi_msg_write_remap_rte() does take the IOMMU intremap lock.

There are also existing callers of iommu_update_ire_from_msi() that
call the function without the pcidevs lock held.  See
hpet_msi_set_affinity() for an example.
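
A condensed sketch of that path (simplified from memory of
xen/arch/x86/hpet.c; treat the details as assumptions, not verbatim
code):

    /* Runs under the irq_desc lock; pcidevs_lock() is never taken. */
    static void hpet_msi_set_affinity(struct irq_desc *desc,
                                      const cpumask_t *mask)
    {
        struct msi_msg msg;

        /* ... recompute the destination/vector fields of msg ... */

        /*
         * Writing the message ends up in iommu_update_ire_from_msi(),
         * where msi_msg_write_remap_rte() serializes on the VT-d
         * interrupt remapping (intremap) lock, not on pcidevs_lock().
         */
        hpet_msi_write(desc->action->dev_id, &msg);
    }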

Thanks, Roger.
