
Re: [PATCH v8 02/13] vpci: use per-domain PCI lock to protect vpci structure


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Date: Thu, 27 Jul 2023 15:13:59 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 27 Jul 2023 15:14:10 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHZuqGrLMJPz1ipBEu4urExXp8yi6/CgxUAgAWEoICAA8vwgIAA+VsAgAB5WACAAC7UgIAAEweAgAA5kgA=
  • Thread-topic: [PATCH v8 02/13] vpci: use per-domain PCI lock to protect vpci structure

Jan,

Jan Beulich <jbeulich@xxxxxxxx> writes:

> On 27.07.2023 12:31, Volodymyr Babchuk wrote:
>> 
>> Hi Jan
>> 
>> Jan Beulich <jbeulich@xxxxxxxx> writes:
>> 
>>> On 27.07.2023 02:56, Volodymyr Babchuk wrote:
>>>> Hi Roger,
>>>>
>>>> Roger Pau Monné <roger.pau@xxxxxxxxxx> writes:
>>>>
>>>>> On Wed, Jul 26, 2023 at 01:17:58AM +0000, Volodymyr Babchuk wrote:
>>>>>>
>>>>>> Hi Roger,
>>>>>>
>>>>>> Roger Pau Monné <roger.pau@xxxxxxxxxx> writes:
>>>>>>
>>>>>>> On Thu, Jul 20, 2023 at 12:32:31AM +0000, Volodymyr Babchuk wrote:
>>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
>>>>>>>> @@ -498,6 +537,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
>>>>>>>>          ASSERT(data_offset < size);
>>>>>>>>      }
>>>>>>>>      spin_unlock(&pdev->vpci->lock);
>>>>>>>> +    unlock_locks(d);
>>>>>>>
>>>>>>> There's one issue here, some handlers will call pcidevs_lock(), which
>>>>>>> will result in a lock order inversion, as in the previous patch we
>>>>>>> agreed that the locking order was pcidevs_lock first, d->pci_lock
>>>>>>> after.
>>>>>>>
>>>>>>> For example the MSI control_write() handler will call
>>>>>>> vpci_msi_arch_enable() which takes the pcidevs lock.  I think I will
>>>>>>> have to look into using a dedicated lock for MSI related handling, as
>>>>>>> that's the only place where I think we have this pattern of taking the
>>>>>>> pcidevs_lock after the d->pci_lock.
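(To spell out the inversion Roger describes, a rough illustrative sketch;
the function names wrapping the two paths are made up for illustration,
and the exact call chain is assumed, not taken from the series:)

    /* Ordering in this patch, via vpci_write() (assuming read mode): */
    static void order_in_vpci_write(struct domain *d)
    {
        read_lock(&d->pci_lock);   /* d->pci_lock taken first ...     */
        /* ... MSI control_write() -> vpci_msi_arch_enable() then does: */
        pcidevs_lock();            /* ... pcidevs_lock taken second.  */
        pcidevs_unlock();
        read_unlock(&d->pci_lock);
    }

    /* Ordering agreed for the previous patch: */
    static void agreed_order(struct domain *d)
    {
        pcidevs_lock();            /* pcidevs_lock taken first ...    */
        read_lock(&d->pci_lock);   /* ... d->pci_lock taken second.   */
        read_unlock(&d->pci_lock);
        pcidevs_unlock();
    }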
>>>>>>
>>>>>> I'll mention this in the commit message. Is there something else that I
>>>>>> should do right now?
>>>>>
>>>>> Well, I don't think we want to commit this as-is with a known lock
>>>>> inversion.
>>>>>
>>>>> The functions that require the pcidevs lock are:
>>>>>
>>>>> pt_irq_{create,destroy}_bind()
>>>>> unmap_domain_pirq()
>>>>>
>>>>> AFAICT those functions require the lock in order to assert that the
>>>>> underlying device doesn't go away, as they do also use d->event_lock
>>>>> in order to get exclusive access to the data fields.  Please double
>>>>> check that I'm not mistaken.
>>>>
>>>> You are right, all three functions do not access any PCI state
>>>> directly. However...
>>>>
>>>>> If that's accurate you will have to check the call tree that spawns
>>>>> from those functions in order to modify the asserts to check for
>>>>> either the pcidevs or the per-domain pci_list lock being taken.
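(If so, I suppose the asserts could become something along these lines;
just a sketch, assuming d->pci_lock is an rwlock and the usual helpers:)

    ASSERT(pcidevs_locked() || rw_is_locked(&d->pci_lock));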
>>>>
>>>> ... I checked the calls for the PT_IRQ_TYPE_MSI case; there is only one
>>>> call that bothers me: hvm_pi_update_irte(), which calls IO-MMU code via
>>>> vmx_pi_update_irte():
>>>>
>>>> amd_iommu_msi_msg_update_ire() or msi_msg_write_remap_rte().
>>>>
>>>> Both functions read basic pdev fields like sbdf or type. I see no
>>>> problem there, as the values of those fields are not supposed to change.
>>>
>>> But whether fields are basic or will never change doesn't matter when
>>> the pdev struct itself suddenly disappears.
>> 
>> This is not a problem, as it is expected that d->pci_lock is being held,
>> so the pdev structure will not disappear. I am trying to answer another
>> question: is d->pci_lock enough, or is pcidevs_lock also required?
>
> To answer such questions, may I ask that you first firmly write down
> (and submit) what each of the locks guards?

I can do this for the newly introduced lock. So, domain->pci_lock guards:

1. domain->pcidevs_list. This means that PCI devices can't be added to
or removed from a domain while the lock is held in read mode. As a
byproduct, any pdev assigned to a domain can't be deleted, because we
would need to deassign it first. To modify domain->pcidevs_list we need
to hold both d->pci_lock in write mode and pcidevs_lock.

2. Presence of the pdev->vpci struct for any pdev assigned to a domain.
The structure itself is protected by pdev->vpci->lock, but to add or
remove pdev->vpci itself we need to hold d->pci_lock in write mode (see
the sketch below).
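
To make the intended usage concrete, a rough sketch (assuming the rwlock
API and the existing helpers; not literal code from the series):

    /* Readers: keep the domain's device list and pdev->vpci stable. */
    read_lock(&d->pci_lock);
    for_each_pdev ( d, pdev )
    {
        if ( pdev->vpci )
        {
            spin_lock(&pdev->vpci->lock);  /* per-device vPCI state */
            /* ... access the vPCI handlers/registers ... */
            spin_unlock(&pdev->vpci->lock);
        }
    }
    read_unlock(&d->pci_lock);

    /* Writers: assign/deassign a device, or add/remove pdev->vpci. */
    pcidevs_lock();                        /* outer lock, as agreed */
    write_lock(&d->pci_lock);
    /* ... modify the domain's device list or pdev->vpci ... */
    write_unlock(&d->pci_lock);
    pcidevs_unlock();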

As for pcidevs_lock, AFAIK there are no strictly written rules about
what exactly is protected by this lock.

-- 
WBR, Volodymyr

 

