
[QEMU PATCH v9] xen/passthrough: use gsi to map pirq when dom0 is PVH


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Anthony PERARD <anthony@xxxxxxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, "Edgar E . Iglesias" <edgar.iglesias@xxxxxxxxx>, "Michael S . Tsirkin" <mst@xxxxxxxxxx>, "Marcel Apfelbaum" <marcel.apfelbaum@xxxxxxxxx>
  • From: Jiqian Chen <Jiqian.Chen@xxxxxxx>
  • Date: Thu, 24 Oct 2024 17:06:29 +0800
  • Cc: <qemu-devel@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jiqian Chen <Jiqian.Chen@xxxxxxx>, Huang Rui <ray.huang@xxxxxxx>
  • Delivery-date: Thu, 24 Oct 2024 09:06:58 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

In PVH dom0, when passing a device through to a domU, the QEMU code path
xen_pt_realize->xc_physdev_map_pirq wants to use a gsi, but in the current code
the gsi number is taken from the file /sys/bus/pci/devices/<sbdf>/irq. That is
wrong: the irq is not equal to the gsi (they live in different spaces), so the
pirq mapping fails.

To solve the above problem, use the new Xen interface xc_pcidev_get_gsi to get
the gsi, and use xc_physdev_map_pirq_gsi to map the pirq when dom0 is PVH.
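
In condensed form, the new mapping path looks roughly like the sketch below
(illustration only, mirroring the code added by this patch; the wrapper name
map_pirq_for_device is hypothetical, and error handling plus the
CONFIG_XEN_CTRL_INTERFACE_VERSION guard are omitted):

    /* Sketch: take the gsi-based path on a PVH dom0, otherwise keep the old path. */
    static int map_pirq_for_device(XenPCIPassthroughState *s, int machine_irq, int *pirq)
    {
        if (xen_pt_need_gsi()) {
            /* dom0 reports "PVH" in /sys/hypervisor/guest_type */
            int gsi = xc_pcidev_get_gsi(xen_xc,
                                        PCI_SBDF(s->real_device.domain,
                                                 s->real_device.bus,
                                                 s->real_device.dev,
                                                 s->real_device.func));
            if (gsi < 0) {
                return gsi; /* the sbdf could not be translated to a gsi */
            }
            return xc_physdev_map_pirq_gsi(xen_xc, xen_domid, gsi, pirq);
        }
        /* PV dom0: the irq read from sysfs can be used as before */
        return xc_physdev_map_pirq(xen_xc, xen_domid, machine_irq, pirq);
    }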

Signed-off-by: Jiqian Chen <Jiqian.Chen@xxxxxxx>
Signed-off-by: Huang Rui <ray.huang@xxxxxxx>
---
Hi All,
This is v9 to support passthrough on Xen when dom0 is PVH.
v8->v9 changes:
* Moved the definition of PCI_SBDF from hw/xen/xen_pt.c to include/hw/pci/pci.h.
* Renamed xen_run_qemu_on_hvm to xen_pt_need_gsi.
* Renamed xen_map_pirq_for_gsi to xen_pt_map_pirq_for_gsi.
* Read /sys/hypervisor/guest_type to get the domain type instead of using
  xc_domain_getinfo_single.

Best regards,
Jiqian Chen

v7->v8 changes:
* Switched to xc_pcidev_get_gsi, since xc_physdev_gsi_from_dev was renamed to it.
* Added xen_run_qemu_on_hvm to check whether QEMU runs on a PV dom0; if not, use
  xc_physdev_map_pirq_gsi to map the pirq.
* Used CONFIG_XEN_CTRL_INTERFACE_VERSION to wrap the new part for compatibility.
* Added "#define DOMID_RUN_QEMU 0" to represent the id of the domain that QEMU
  runs on.

v6->v7 changes:
* Because the function for obtaining the gsi was changed on the kernel and Xen
  side, changed to use xc_physdev_gsi_from_dev, which requires passing in the
  sbdf instead of the irq.

v5->v6 changes:
* Because the function for obtaining the gsi was changed on the kernel and Xen
  side, changed to use xc_physdev_gsi_from_irq instead of the gsi sysfs.
* Since the function changed, removed the Reviewed-by of Stefano.

v4->v5 changes:
* Added Reviewed-by from Stefano.

v3->v4 changes:
* Added gsi into struct XenHostPCIDevice and used the gsi number read from the
  gsi sysfs if it exists; if there is no gsi sysfs, still use the irq.

v2->v3 changes:
* Due to changes in the implementation of the second kernel-side patch (which
  adds a new sysfs entry for the gsi instead of a new syscall), read the gsi
  number from the gsi sysfs.

v1 and v2:
We can record the relation between gsi and irq, then when userspace (QEMU) wants
to use the gsi, we can do a translation. The third kernel patch (xen/privcmd: Add
new syscall to get gsi from irq) records all the relations in
acpi_register_gsi_xen_pvh() when dom0 initializes PCI devices, and provides a
syscall for userspace to get the gsi from the irq. The third Xen patch (tools:
Add new function to get gsi from irq) adds a new function
xc_physdev_gsi_from_irq() to call the new syscall added on the kernel side.
Userspace can then use that function to get the gsi, and xc_physdev_map_pirq()
will succeed.

Issues we encountered:
1. Failed to map a pirq for the gsi
Problem: QEMU calls xc_physdev_map_pirq() in xen_pt_realize() to map a
passthrough device's gsi to a pirq, but the call fails.

Reason: According to the implementation of xc_physdev_map_pirq(), it needs a gsi
instead of an irq, but QEMU passes the irq and treats it as the gsi; the value is
read from the file /sys/bus/pci/devices/xxxx:xx:xx.x/irq in
xen_host_pci_device_get(). The gsi number is not actually equal to the irq; they
are in different spaces.
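
For reference, the sbdf value passed to xc_pcidev_get_gsi packs segment, bus,
device and function into a single 32-bit value, which is what the PCI_SBDF macro
added below does. A small worked example for a hypothetical host device
0000:03:00.1 (it relies on QEMU's existing PCI_DEVFN and PCI_BUILD_BDF helpers):

    uint32_t sbdf = PCI_SBDF(0x0000, 0x03, 0x00, 0x1);
    /* = (0x0000 << 16) | PCI_BUILD_BDF(0x03, PCI_DEVFN(0x00, 0x1))
     * = (0x03 << 8) | ((0x00 << 3) | 0x1)
     * = 0x0301
     */
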
---
 hw/xen/xen_pt.c      | 53 ++++++++++++++++++++++++++++++++++++++++++++
 include/hw/pci/pci.h |  4 ++++
 2 files changed, 57 insertions(+)

diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 3635d1b39f79..5b10d501d566 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -766,6 +766,50 @@ static void xen_pt_destroy(PCIDevice *d) {
 }
 /* init */
 
+#if CONFIG_XEN_CTRL_INTERFACE_VERSION >= 42000
+static bool xen_pt_need_gsi(void)
+{
+    FILE *fp;
+    int len;
+    char type[10];
+    const char *guest_type = "/sys/hypervisor/guest_type";
+
+    fp = fopen(guest_type, "r");
+    if (fp == NULL) {
+        error_report("Cannot open %s: %s", guest_type, strerror(errno));
+        return false;
+    }
+    fgets(type, sizeof(type), fp);
+    fclose(fp);
+
+    len = strlen(type);
+    if (len) {
+        type[len - 1] = '\0';
+        if (!strcmp(type, "PVH")) {
+            return true;
+        }
+    }
+    return false;
+}
+
+static int xen_pt_map_pirq_for_gsi(PCIDevice *d, int *pirq)
+{
+    int gsi;
+    XenPCIPassthroughState *s = XEN_PT_DEVICE(d);
+
+    gsi = xc_pcidev_get_gsi(xen_xc,
+                            PCI_SBDF(s->real_device.domain,
+                                     s->real_device.bus,
+                                     s->real_device.dev,
+                                     s->real_device.func));
+    if (gsi >= 0) {
+        return xc_physdev_map_pirq_gsi(xen_xc, xen_domid, gsi, pirq);
+    }
+
+    return gsi;
+}
+#endif
+
 static void xen_pt_realize(PCIDevice *d, Error **errp)
 {
     ERRP_GUARD();
@@ -847,7 +891,16 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
         goto out;
     }
 
+#if CONFIG_XEN_CTRL_INTERFACE_VERSION >= 42000
+    if (xen_pt_need_gsi()) {
+        rc = xen_pt_map_pirq_for_gsi(d, &pirq);
+    } else {
+        rc = xc_physdev_map_pirq(xen_xc, xen_domid, machine_irq, &pirq);
+    }
+#else
     rc = xc_physdev_map_pirq(xen_xc, xen_domid, machine_irq, &pirq);
+#endif
+
     if (rc < 0) {
         XEN_PT_ERR(d, "Mapping machine irq %u to pirq %i failed, (err: %d)\n",
                    machine_irq, pirq, errno);
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index eb26cac81098..07805aa8a5f3 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -23,6 +23,10 @@ extern bool pci_available;
 #define PCI_SLOT_MAX            32
 #define PCI_FUNC_MAX            8
 
+#define PCI_SBDF(seg, bus, dev, func) \
+            ((((uint32_t)(seg)) << 16) | \
+            (PCI_BUILD_BDF(bus, PCI_DEVFN(dev, func))))
+
 /* Class, Vendor and Device IDs from Linux's pci_ids.h */
 #include "hw/pci/pci_ids.h"
 
-- 
2.34.1