
Re: [PATCH v2 5/8] pci/arm: Use iommu_add_dt_pci_device()


  • To: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 12 May 2023 09:25:16 +0200
  • Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 12 May 2023 07:25:40 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 11.05.2023 21:16, Stewart Hildebrand wrote:
> @@ -762,9 +767,20 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>              pdev->domain = NULL;
>              goto out;
>          }
> +#ifdef CONFIG_HAS_DEVICE_TREE
> +        ret = iommu_add_dt_pci_device(pdev);
> +        if ( ret < 0 )
> +        {
> +            printk(XENLOG_ERR "pci-iommu translation failed: %d\n", ret);
> +            goto out;
> +        }
> +#endif
>          ret = iommu_add_device(pdev);

Hmm, am I misremembering that in the earlier patch you had #else to
invoke the alternative behavior? Now you end up calling both functions;
if that's indeed intended, this may still want doing differently.
Looking at the earlier patch introducing the function, I can't infer
though whether that's intended: iommu_add_dt_pci_device() checks that
the add_device hook is present, but then I didn't find any use of this
hook. The revlog there suggests the check might be stale.

If indeed the function does only preparatory work, I don't see why it
would need naming "iommu_..."; I'd rather consider pci_add_dt_device()
then. Plus in such a case #ifdef-ary here probably wants avoiding by
introducing a suitable no-op stub for the !HAS_DEVICE_TREE case. Then
...

>          if ( ret )
>          {
> +#ifdef CONFIG_HAS_DEVICE_TREE
> +            iommu_fwspec_free(pci_to_dev(pdev));
> +#endif

... this (which I understand is doing the corresponding cleanup) then
also wants wrapping in a suitably named tiny helper function.

But yet further I'm then no longer convinced this is the right place
for the addition. pci_add_device() is backing physdev hypercalls. It
would seem to me that the function may want invoking yet one layer
further up, or it may even want invoking from a brand new DT-specific
physdev-op. This would then leave at least the x86-only paths (invoking
pci_add_device() from outside of pci_physdev_op()) entirely alone.

Jan



 

