
[PATCH v4 06/10] vpci: Hide extended capability when it fails to initialize


  • To: <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jiqian Chen <Jiqian.Chen@xxxxxxx>
  • Date: Fri, 9 May 2025 17:05:38 +0800
  • Cc: Huang Rui <ray.huang@xxxxxxx>, Jiqian Chen <Jiqian.Chen@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Fri, 09 May 2025 09:06:17 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

When vpci fails to initialize an extended capability of a device, it
just returns an error and vPCI gets disabled for the whole device.

So, add a function to hide an extended capability when its
initialization fails, and remove the failed extended capability's
handler from the vpci extended capability list.
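
To illustrate the approach, a standalone sketch (not vpci code; the fake
config space, offsets, capability IDs and helper names below are all made up):
an extended capability is hidden by making its predecessor's "next capability"
pointer skip over it, which is what the patch does on vpci's emulated copy of
the headers. The capability at 0x100U has no predecessor, which is the special
case handled separately below.

/* Standalone sketch, not vpci code: hide an extended capability by
 * rewiring its predecessor's "next" pointer to skip over it.  The
 * config space layout, capability IDs and helpers are made up. */
#include <stdint.h>
#include <stdio.h>

#define SKETCH_CFG_SPACE_SIZE  0x100u            /* first ext. cap position */
#define SKETCH_EXT_CAP_ID(h)   ((h) & 0xffffu)
#define SKETCH_EXT_CAP_NEXT(h) (((h) >> 20) & 0xffcu)

static uint32_t cfg[0x1000 / 4];                 /* fake extended config space */

static uint32_t rd(unsigned int off) { return cfg[off / 4]; }
static void wr(unsigned int off, uint32_t v) { cfg[off / 4] = v; }

/* Make the predecessor of 'victim' point at whatever 'victim' points to. */
static void hide_ext_cap(unsigned int victim)
{
    unsigned int pos = SKETCH_CFG_SPACE_SIZE;

    while ( pos && SKETCH_EXT_CAP_NEXT(rd(pos)) != victim )
        pos = SKETCH_EXT_CAP_NEXT(rd(pos));

    if ( !pos )
        return;        /* no predecessor found (e.g. victim is at 0x100) */

    wr(pos, (rd(pos) & ~(0xfffu << 20)) |
            (SKETCH_EXT_CAP_NEXT(rd(victim)) << 20));
}

int main(void)
{
    /* Made-up chain: 0x100 -> 0x140 -> 0x200 -> end. */
    wr(0x100, (0x140u << 20) | (1u << 16) | 0x0001);
    wr(0x140, (0x200u << 20) | (1u << 16) | 0x0003);
    wr(0x200, (0x000u << 20) | (1u << 16) | 0x000b);

    hide_ext_cap(0x140);

    for ( unsigned int pos = 0x100; pos; pos = SKETCH_EXT_CAP_NEXT(rd(pos)) )
        printf("capability %#x at %#x\n", SKETCH_EXT_CAP_ID(rd(pos)), pos);
    /* Prints 0x1 at 0x100 and 0xb at 0x200; 0x140 is no longer visible. */

    return 0;
}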

Signed-off-by: Jiqian Chen <Jiqian.Chen@xxxxxxx>
---
cc: "Roger Pau Monné" <roger.pau@xxxxxxxxxx>
---
v3->v4 changes:
* Change the definition of PCI_EXT_CAP_NEXT to "#define PCI_EXT_CAP_NEXT(header)
  (MASK_EXTR(header, PCI_EXT_CAP_NEXT_MASK) & 0xFFCU)" to avoid redundancy
  (a worked example follows after this list).
* Modify the commit message.
* Change vpci_ext_capability_mask() to return an error instead of using ASSERT.
* Set the capability ID part to zero when we need to hide the capability at
  position 0x100U.
* Add the check "if ( !offset )" in vpci_ext_capability_mask().
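
For reference, a worked example of the new PCI_EXT_CAP_NEXT(). This is a
standalone sketch: MASK_EXTR() is repeated here (as I read it in Xen's lib.h)
so the example builds outside the tree, and the header value is made up.

/* Standalone sketch of the new PCI_EXT_CAP_NEXT(); MASK_EXTR() is repeated
 * here so the example builds outside the Xen tree.  The header value is a
 * made-up AER header: next capability at 0x140, version 1, ID 0x0001. */
#include <stdint.h>
#include <stdio.h>

#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))

#define PCI_EXT_CAP_NEXT_MASK  0xFFF00000U
/* Bottom two bits of the next capability position are reserved. */
#define PCI_EXT_CAP_NEXT(header) \
    (MASK_EXTR(header, PCI_EXT_CAP_NEXT_MASK) & 0xFFCU)

int main(void)
{
    uint32_t header = 0x14010001;

    /* (header & 0xFFF00000U) / 0x00100000U is the same as header >> 20,
     * so the result is (header >> 20) & 0xFFC, i.e. 0x140 here. */
    printf("next capability at %#x\n", PCI_EXT_CAP_NEXT(header));

    return 0;
}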

v2->v3 changes:
* Separated from the previous version's patch "vpci: Hide capability when it
  fails to initialize".
* Whole implementation changed because the last version was wrong.
  This version gets the target handler and the previous handler from
  vpci->handlers, then removes the target.
* Note: one case in vpci_ext_capability_mask() needs to be discussed: when the
  target capability sits at offset 0x100U (the first extended capability),
  removing it would change the offset of the next capability, so my
  implementation just ignored it and let the hardware handle the target
  capability (see the sketch after this list for what v4 does instead).
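
For the 0x100U case mentioned in the note above, here is a standalone sketch
(made-up values) of what v4 now does instead of unlinking: the first extended
capability has no predecessor whose next pointer could be rewired, and moving
the following capability down to 0x100U would make vpci's offsets disagree
with the hardware, so the emulated header at 0x100U keeps its next pointer and
only has its capability ID cleared (or is zeroed entirely when nothing
follows).

/* Standalone sketch with made-up values, mirroring the branch the patch
 * adds in vpci_ext_capability_mask() for a failed capability at 0x100U. */
#include <stdint.h>
#include <stdio.h>

#define PCI_CFG_SPACE_SIZE       0x100u
#define PCI_EXT_CAP_ID(header)   ((header) & 0x0000ffffu)
#define PCI_EXT_CAP_NEXT(header) (((header) >> 20) & 0xffcu)

int main(void)
{
    /* Failed capability at 0x100U with another capability following at 0x140. */
    uint32_t header = 0x14010001;

    if ( PCI_EXT_CAP_NEXT(header) <= PCI_CFG_SPACE_SIZE )
        /* Nothing valid follows: expose an empty extended capability list. */
        header = 0;
    else
        /* Something follows: keep the next pointer and version, clear the
         * capability ID so the slot no longer advertises the failed one. */
        header &= ~PCI_EXT_CAP_ID(header);

    printf("emulated header at 0x100: %#x\n", header);   /* 0x14010000 */

    return 0;
}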

v1->v2 changes:
* Removed the "priorities" of initializing capabilities since they aren't used
  anymore.
* Added new functions vpci_capability_mask() and vpci_ext_capability_mask() to
  remove a failed capability from the list.
* Called vpci_make_msix_hole() at the end of init_msix().

Best regards,
Jiqian Chen.
---
 xen/drivers/vpci/vpci.c    | 100 +++++++++++++++++++++++++++++++++++--
 xen/include/xen/pci_regs.h |   5 +-
 2 files changed, 100 insertions(+), 5 deletions(-)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index e1d4e9aa9b88..76d663753e7b 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -193,6 +193,98 @@ static int vpci_capability_mask(struct pci_dev *pdev, unsigned int cap)
     return 0;
 }
 
+static struct vpci_register *vpci_get_previous_ext_cap_register(
+    struct vpci *vpci, unsigned int offset)
+{
+    uint32_t header;
+    unsigned int pos = PCI_CFG_SPACE_SIZE;
+    struct vpci_register *r;
+
+    if ( offset <= PCI_CFG_SPACE_SIZE )
+    {
+        ASSERT_UNREACHABLE();
+        return NULL;
+    }
+
+    r = vpci_get_register(vpci, pos, 4);
+    if ( !r )
+        return NULL;
+
+    header = (uint32_t)(uintptr_t)r->private;
+    pos = PCI_EXT_CAP_NEXT(header);
+    while ( pos > PCI_CFG_SPACE_SIZE && pos != offset )
+    {
+        r = vpci_get_register(vpci, pos, 4);
+        if ( !r )
+            return NULL;
+        header = (uint32_t)(uintptr_t)r->private;
+        pos = PCI_EXT_CAP_NEXT(header);
+    }
+
+    if ( pos <= PCI_CFG_SPACE_SIZE )
+        return NULL;
+
+    return r;
+}
+
+static int vpci_ext_capability_mask(struct pci_dev *pdev, unsigned int cap)
+{
+    const unsigned int offset = pci_find_ext_capability(pdev->sbdf, cap);
+    struct vpci_register *rm, *prev_r;
+    struct vpci *vpci = pdev->vpci;
+    uint32_t header, pre_header;
+
+    if ( !offset )
+    {
+        ASSERT_UNREACHABLE();
+        return 0;
+    }
+
+    spin_lock(&vpci->lock);
+    rm = vpci_get_register(vpci, offset, 4);
+    if ( !rm )
+    {
+        spin_unlock(&vpci->lock);
+        return -ENODEV;
+    }
+
+    header = (uint32_t)(uintptr_t)rm->private;
+    if ( offset == PCI_CFG_SPACE_SIZE )
+    {
+        if ( PCI_EXT_CAP_NEXT(header) <= PCI_CFG_SPACE_SIZE )
+            rm->private = (void *)(uintptr_t)0;
+        else
+            /*
+             * Removing the capability at position 0x100U would require
+             * moving the next capability into position 0x100U, which would
+             * make its offset in vpci differ from the hardware and lead to
+             * erroneous accesses.  Hence keep the header in place and only
+             * clear its capability ID part.
+             */
+            rm->private = (void *)(uintptr_t)(header &
+                                              ~PCI_EXT_CAP_ID(header));
+
+        spin_unlock(&vpci->lock);
+        return 0;
+    }
+
+    prev_r = vpci_get_previous_ext_cap_register(vpci, offset);
+    if ( !prev_r )
+    {
+        spin_unlock(&vpci->lock);
+        return -ENODEV;
+    }
+
+    pre_header = (uint32_t)(uintptr_t)prev_r->private;
+    prev_r->private = (void *)(uintptr_t)((pre_header &
+                                           ~PCI_EXT_CAP_NEXT_MASK) |
+                                          (header & PCI_EXT_CAP_NEXT_MASK));
+    list_del(&rm->node);
+    spin_unlock(&vpci->lock);
+    xfree(rm);
+    return 0;
+}
+
 static int vpci_init_capabilities(struct pci_dev *pdev)
 {
     for ( unsigned int i = 0; i < NUM_VPCI_INIT; i++ )
@@ -225,11 +317,11 @@ static int vpci_init_capabilities(struct pci_dev *pdev)
                    pdev->domain, &pdev->sbdf,
                    is_ext ? "extended" : "legacy", cap, rc);
             if ( !is_ext )
-            {
                 rc = vpci_capability_mask(pdev, cap);
-                if ( rc )
-                    return rc;
-            }
+            else
+                rc = vpci_ext_capability_mask(pdev, cap);
+            if ( rc )
+                return rc;
         }
     }
 
diff --git a/xen/include/xen/pci_regs.h b/xen/include/xen/pci_regs.h
index 27b4f44eedf3..e62bf72ab3d3 100644
--- a/xen/include/xen/pci_regs.h
+++ b/xen/include/xen/pci_regs.h
@@ -448,7 +448,10 @@
 /* Extended Capabilities (PCI-X 2.0 and Express) */
 #define PCI_EXT_CAP_ID(header)         ((header) & 0x0000ffff)
 #define PCI_EXT_CAP_VER(header)                (((header) >> 16) & 0xf)
-#define PCI_EXT_CAP_NEXT(header)       (((header) >> 20) & 0xffc)
+#define PCI_EXT_CAP_NEXT_MASK          0xFFF00000U
+/* Bottom two bits of next capability position are reserved. */
+#define PCI_EXT_CAP_NEXT(header) \
+    (MASK_EXTR(header, PCI_EXT_CAP_NEXT_MASK) & 0xFFCU)
 
 #define PCI_EXT_CAP_ID_ERR     1
 #define PCI_EXT_CAP_ID_VC      2
-- 
2.34.1