
Re: [PATCH v5 08/11] xen/arm: Enable the existing x86 virtual PCI support for ARM.


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Wed, 13 Oct 2021 10:02:01 +0000
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Rahul Singh <Rahul.Singh@xxxxxxx>, Andre Przywara <Andre.Przywara@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Paul Durrant <paul@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 13 Oct 2021 10:02:24 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v5 08/11] xen/arm: Enable the existing x86 virtual PCI support for ARM.

Hi Jan,

> On 13 Oct 2021, at 07:10, Jan Beulich <jbeulich@xxxxxxxx> wrote:
> 
> On 12.10.2021 23:37, Stefano Stabellini wrote:
>> On Tue, 12 Oct 2021, Jan Beulich wrote:
>>> On 11.10.2021 20:18, Stefano Stabellini wrote:
>>>> On Mon, 11 Oct 2021, Jan Beulich wrote:
>>>>> On 11.10.2021 15:34, Bertrand Marquis wrote:
>>>>>>> On 11 Oct 2021, at 14:09, Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>>>> On 11.10.2021 14:41, Bertrand Marquis wrote:
>>>>>>>> But digging deeper into this, I would have 2 questions:
>>>>>>>> 
>>>>>>>> - msi_cleanup was done there after a request from Stefano, but is not
>>>>>>>> done in case of iommu error; is there an issue to solve here?
>>>>>>> 
>>>>>>> Maybe, but I'm not sure. This very much depends on what a domain
>>>>>>> could in principle do with a partly set-up device. Plus let's
>>>>>>> not forget that we're talking of only Dom0 here (for now at least,
>>>>>>> i.e. not considering the dom0less case).
>>>>>>> 
>>>>>>> But I'd also like to further defer to Stefano.
>>>>>> 
>>>>>> Ok, I must admit I do not really see at that stage why doing an MSI
>>>>>> cleanup could be needed, so I will wait for Stefano to know if I need
>>>>>> to keep this when moving the block up (in the end it is theoretical
>>>>>> right now as this is empty).
>>>> 
>>>> I know that MSIs are not supported yet on ARM (pci_cleanup_msi does
>>>> nothing). But I wanted to make sure that the pci_cleanup_msi() calls are
>>>> present anywhere necessary, especially on the error paths. So that once
>>>> we add MSI support, we don't need to search through the code to find all
>>>> the error paths missing a pci_cleanup_msi() call.
>>>> 
>>>> To answer your first question: you are right, we are also missing a
>>>> pci_cleanup_msi() call in the case of IOMMU error. So it might be better
>>>> to move the call to pci_cleanup_msi() under the "out" label so that we
>>>> can do it once for both cases.
>>>> 
>>>> To answer your second point about whether it is necessary at all: if
>>>> MSIs and MSI-Xs cannot be already setup at this point at all (not even
>>>> the enable bit), then we don't need any call to pci_cleanup_msi() in
>>>> pci_add_device.
>>> 
>>> Well, at the very least MSI can't be set up ahead of the traps getting
>>> put in place. Whether partial success of putting traps in place may
>>> allow a cunning guest to set up MSI may depend on further aspects.
>> 
>> Good point about MSIs not being setup before the traps. We should remove
>> the call to pci_cleanup_msi() in the error path then.
> 
> Your reply makes me fear you didn't pay enough attention to the "partial"
> in my earlier reply. The traps for the various registers can't all be set
> up atomically, so there may be a transient period where enough traps are
> already in place for a cunning guest to arrange for setup. Unless, as
> said, there are further setup steps needed before a guest could succeed
> in doing so.
> 
> But even if partial trap setup alone was sufficient, I think the cleaning
> up of MSI then might still better go on the error path there than on that
> of pci_add_device().

I think I should put the msi_cleanup in the exit path when pdev is non-NULL
but ret is non-zero (i.e. in an "else if ( pdev )" branch). This would cover
all exit paths, especially as I will move the add_handler call before the
IOMMU init.

Would that be ok for everyone ?

Cheers
Bertrand
