
Re: [PATCH v6 4/5] [FUTURE] xen/arm: enable vPCI for domUs


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Date: Sun, 3 Dec 2023 22:54:34 -0500
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, "Bertrand Marquis" <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, "Jan Beulich" <jbeulich@xxxxxxxx>, Wei Liu <wl@xxxxxxx>, Paul Durrant <paul@xxxxxxx>
  • Delivery-date: Mon, 04 Dec 2023 03:54:54 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 12/1/23 21:56, Stefano Stabellini wrote:
> On Fri, 1 Dec 2023, Roger Pau Monné wrote:
>> On Mon, Nov 13, 2023 at 05:21:13PM -0500, Stewart Hildebrand wrote:
>>> @@ -1618,6 +1630,14 @@ int iommu_do_pci_domctl(
>>>          bus = PCI_BUS(machine_sbdf);
>>>          devfn = PCI_DEVFN(machine_sbdf);
>>>  
>>> +        if ( needs_vpci(d) && !has_vpci(d) )
>>> +        {
>>> +            printk(XENLOG_G_WARNING "Cannot assign %pp to %pd: vPCI support not enabled\n",
>>> +                   &PCI_SBDF(seg, bus, devfn), d);
>>> +            ret = -EPERM;
>>> +            break;
>>
>> I think this is likely too restrictive going forward.  The current
>> approach is indeed to enable vPCI on a per-domain basis because that's
>> how PVH dom0 uses it, due to being unable to use ioreq servers.
>>
>> If we start to expose vPCI support to guests the interface should be on
>> a per-device basis, so that vPCI could be enabled for some devices,
>> while others could still be handled by ioreq servers.
>>
>> We might want to add a new flag to xen_domctl_assign_device (used by
>> XEN_DOMCTL_assign_device) in order to signal whether the device will
>> use vPCI.
> 
> Actually I don't think this is a good idea. I am all for flexibility but
> supporting multiple different configurations comes at an extra cost for
> both maintainers and contributors. I think we should try to reduce the
> amount of configurations we support rather than increasing them
> (especially on x86 where we have PV, PVH, HVM).
> 
> I don't think we should enable IOREQ servers to handle PCI passthrough
> for PVH guests and/or guests with vPCI. If the domain has vPCI, PCI
> Passthrough can be handled by vPCI just fine. I think this should be a
> good anti-feature to have (a goal to explicitly not add this feature) to
> reduce complexity. Unless you see a specific use case to add support for
> it?

Just to preemptively clarify: there is a use case for combining passthrough of 
physical devices (vPCI) with emulated virtio-pci devices (ioreq) in the same 
domain. However, the XEN_DOMCTL_assign_device hypercall, where this check is 
added, is only used for real physical hardware devices as far as I can tell. So 
I agree: I can't see a use case for passing through some physical devices with 
vPCI and other physical devices with qemu/ioreq.

With that said, the plumbing for a new flag does not appear to be particularly 
complex. It may actually be simpler than introducing a per-arch needs_vpci(d) 
heuristic.
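
For reference, such a per-arch heuristic would look roughly like this (the
conditions below are illustrative only, not necessarily the final per-arch
definitions):

/* Illustrative sketch only - the per-arch conditions are not final. */

/* xen/arch/arm/include/asm/domain.h */
static inline bool needs_vpci(const struct domain *d)
{
    /* On Arm, a domU getting a physical PCI device would rely on vPCI. */
    return !is_hardware_domain(d);
}

/* xen/arch/x86/include/asm/domain.h */
static inline bool needs_vpci(const struct domain *d)
{
    /* On x86, only a PVH dom0 uses vPCI today; guests go through ioreq. */
    return is_hardware_domain(d) && is_hvm_domain(d);
}

The flag-based plumbing, by contrast, would look like this: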

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 96cb4da0794e..2c38088a4772 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1113,6 +1113,7 @@ typedef struct pci_add_state {
     libxl_device_pci pci;
     libxl_domid pci_domid;
     int retries;
+    bool vpci;
 } pci_add_state;

 static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -1176,6 +1177,10 @@ static void do_pci_add(libxl__egc *egc,
         }
     }

+    if (type == LIBXL_DOMAIN_TYPE_PVH /* includes Arm guests */) {
+        pas->vpci = true;
+    }
+
     rc = 0;

 out:
@@ -1418,7 +1423,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     unsigned long long start, end, flags, size;
     int irq, i;
     int r;
-    uint32_t flag = XEN_DOMCTL_DEV_RDM_RELAXED;
+    uint32_t flag = XEN_DOMCTL_DEV_RDM_RELAXED |
+                    (pas->vpci ? XEN_DOMCTL_DEV_USES_VPCI : 0);
     uint32_t domainid = domid;
     bool isstubdom = libxl_is_stubdom(ctx, domid, &domainid);

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 2203725a2aa6..7786da1cf1e6 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1630,7 +1630,7 @@ int iommu_do_pci_domctl(
         bus = PCI_BUS(machine_sbdf);
         devfn = PCI_DEVFN(machine_sbdf);

-        if ( needs_vpci(d) && !has_vpci(d) )
+        if ( (flags & XEN_DOMCTL_DEV_USES_VPCI) && !has_vpci(d) )
         {
             printk(XENLOG_G_WARNING "Cannot assign %pp to %pd: vPCI support not enabled\n",
                    &PCI_SBDF(seg, bus, devfn), d);
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 8b3ea62cae06..5735d47219bc 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -527,6 +527,7 @@ struct xen_domctl_assign_device {
     uint32_t dev;   /* XEN_DOMCTL_DEV_* */
     uint32_t flags;
 #define XEN_DOMCTL_DEV_RDM_RELAXED      1 /* assign only */
+#define XEN_DOMCTL_DEV_USES_VPCI        (1 << 1)
     union {
         struct {
             uint32_t machine_sbdf;   /* machine PCI ID of assigned device */
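
For completeness: the flag assembled in pci_add_dm_done() above is handed to
Xen through the existing xc_assign_device() call a little further down in that
function, so no extra libxc plumbing should be needed. From memory (the exact
call site in libxl_pci.c may differ slightly):

    /* Later in pci_add_dm_done(); flag now also carries
     * XEN_DOMCTL_DEV_USES_VPCI when pas->vpci is set. */
    r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);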



 

