
Re: [PATCH v2 2/3] PCI: fold pci_get_pdev{,_by_domain}()


  • To: Jan Beulich <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Date: Thu, 11 Aug 2022 13:21:29 +0000
  • Cc: Paul Durrant <paul@xxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>, Rahul Singh <Rahul.Singh@xxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Thu, 11 Aug 2022 13:21:43 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2 2/3] PCI: fold pci_get_pdev{,_by_domain}()

On 11/08/2022 11:52, Jan Beulich wrote:
> Rename the latter, subsuming the functionality of the former when passed
> NULL as first argument.
>
> Since this requires touching all call sites anyway, take the opportunity
> and fold the remaining three parameters into a single pci_sbdf_t one.
>
> No functional change intended. In particular the locking-related
> assertion needs to continue to be kept silent when a non-NULL domain
> pointer is passed - both vpci_read() and vpci_write() call the function
> without holding the lock (adding respective locking to vPCI [or finding
> an alternative to doing so] is the topic of a separate series).
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> v2: New.
>
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -2162,7 +2162,7 @@ int map_domain_pirq(
>          if ( !cpu_has_apic )
>              goto done;
>  
> -        pdev = pci_get_pdev_by_domain(d, msi->seg, msi->bus, msi->devfn);
> +        pdev = pci_get_pdev(d, PCI_SBDF(msi->seg, msi->bus, msi->devfn));

Oh, I should really have read this patch before trying to do the sbdf
conversion in patch 1.
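
For anyone following along: based on the description above, the folded
helper presumably ends up with a prototype along these lines (a sketch
inferred from the commit message, not quoted from the patch):

    /* Sketch: NULL @d requests a search across all devices. */
    struct pci_dev *pci_get_pdev(const struct domain *d, pci_sbdf_t sbdf);

i.e. the domain pointer stays, while the separate seg/bus/devfn
parameters collapse into a single pci_sbdf_t argument.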

However, it occurs to me that this:

diff --git a/xen/arch/x86/include/asm/msi.h b/xen/arch/x86/include/asm/msi.h
index 117379318f2c..6f0ab845017c 100644
--- a/xen/arch/x86/include/asm/msi.h
+++ b/xen/arch/x86/include/asm/msi.h
@@ -59,9 +59,14 @@
 #define FIX_MSIX_MAX_PAGES              512
 
 struct msi_info {
-    u16 seg;
-    u8 bus;
-    u8 devfn;
+    union {
+        struct {
+            u8 devfn;
+            u8 bus;
+            u16 seg;
+        };
+        pci_sbdf_t sbdf;
+    };
     int irq;
     int entry_nr;
     uint64_t table_base;

will simplify several hunks in this patch, because you can then pass
msi->sbdf directly rather than reconstructing it, field by field, from
32 bits' worth of data that already have the right in-memory
representation.
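
As a concrete illustration (a sketch assuming the union above), the
irq.c hunk quoted at the top would then reduce further to:

    -        pdev = pci_get_pdev(d, PCI_SBDF(msi->seg, msi->bus, msi->devfn));
    +        pdev = pci_get_pdev(d, msi->sbdf);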

Preferably with something to this effect included,

Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
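
For context on why the aliasing works: pci_sbdf_t packs devfn into bits
0-7, bus into bits 8-15 and seg into bits 16-31, so on a little-endian
build the struct members in the suggested union line up byte-for-byte
with the 32-bit sbdf value. A simplified sketch of the layout (the real
definition in xen/include/xen/pci.h uses bitfields and nested unions):

    typedef union {
        uint32_t sbdf;
        struct {
            uint32_t devfn : 8;   /* bits  0-7  */
            uint32_t bus   : 8;   /* bits  8-15 */
            uint32_t seg   : 16;  /* bits 16-31 */
        };
    } pci_sbdf_t;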

 

