
Re: [PATCH v2 3/3] [FUTURE] xen/arm: enable vPCI for domUs


  • To: Julien Grall <julien@xxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 7 Jul 2023 13:34:20 +0200
  • Cc: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, Artem Mygaiev <artem_mygaiev@xxxxxxxx>
  • Delivery-date: Fri, 07 Jul 2023 11:34:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, Jul 07, 2023 at 12:16:46PM +0100, Julien Grall wrote:
> 
> 
> On 07/07/2023 11:47, Roger Pau Monné wrote:
> > On Fri, Jul 07, 2023 at 11:33:14AM +0100, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 07/07/2023 11:06, Roger Pau Monné wrote:
> > > > On Fri, Jul 07, 2023 at 10:00:51AM +0100, Julien Grall wrote:
> > > > > On 07/07/2023 02:47, Stewart Hildebrand wrote:
> > > > > > Note that CONFIG_HAS_VPCI_GUEST_SUPPORT is not currently used in
> > > > > > the upstream code base. It will be used by the vPCI series [1].
> > > > > > This patch is intended to be merged as part of the vPCI series.
> > > > > > 
> > > > > > v1->v2:
> > > > > > * new patch
> > > > > > ---
> > > > > >     xen/arch/arm/Kconfig              | 1 +
> > > > > >     xen/arch/arm/include/asm/domain.h | 2 +-
> > > > > >     2 files changed, 2 insertions(+), 1 deletion(-)
> > > > > > 
> > > > > > diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> > > > > > index 4e0cc421ad48..75dfa2f5a82d 100644
> > > > > > --- a/xen/arch/arm/Kconfig
> > > > > > +++ b/xen/arch/arm/Kconfig
> > > > > > @@ -195,6 +195,7 @@ config PCI_PASSTHROUGH
> > > > > >             depends on ARM_64
> > > > > >             select HAS_PCI
> > > > > >             select HAS_VPCI
> > > > > > +   select HAS_VPCI_GUEST_SUPPORT
> > > > > >             default n
> > > > > >             help
> > > > > >               This option enables PCI device passthrough
> > > > > > diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> > > > > > index 1a13965a26b8..6e016b00bae1 100644
> > > > > > --- a/xen/arch/arm/include/asm/domain.h
> > > > > > +++ b/xen/arch/arm/include/asm/domain.h
> > > > > > @@ -298,7 +298,7 @@ static inline void arch_vcpu_block(struct vcpu *v) {}
> > > > > >     #define arch_vm_assist_valid_mask(d) (1UL << VMASST_TYPE_runstate_update_flag)
> > > > > > -#define has_vpci(d) ({ IS_ENABLED(CONFIG_HAS_VPCI) && is_hardware_domain(d); })
> > > > > > +#define has_vpci(d)    ({ (void)(d); IS_ENABLED(CONFIG_HAS_VPCI); })
> > > > > 
> > > > > As I mentioned in the previous patch, wouldn't this enable vPCI
> > > > > unconditionally for all the domain? Shouldn't this be instead an 
> > > > > optional
> > > > > feature which would be selected by the toolstack?
> > > > 
> > > > I do think so, at least on x86 we signal whether vPCI should be
> > > > enabled for a domain using xen_arch_domainconfig at domain creation.
> > > > 
> > > > Ideally we would like to do this on a per-device basis for domUs, so
> > > > we should consider adding a new flag to xen_domctl_assign_device in
> > > > order to signal whether the assigned device should use vPCI.
> > > 
> > > I am a bit confused with this paragraph. If the device is not using vPCI,
> > > how will it be exposed to the domain? Are you planning to support both 
> > > vPCI
> > > and PV PCI passthrough for a same domain?
> > 
> > You could have an external device model handling it using the ioreq
> > interface, like we currently do passthrough for HVM guests.
> 
> IMHO, if one decides to use QEMU for emulating the host bridge, then there
> is limited point in also asking Xen to emulate the host bridge for some
> other device. So what would be the use case where you would want this to be
> a per-device decision?
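
For reference, the per-device signalling I suggested above could be as
simple as a new bit in the existing assign-device domctl.  A rough sketch
only: XEN_DOMCTL_DEV_USE_VPCI and its value are invented here, and just
the surrounding struct matches what is in xen/include/public/domctl.h
today:

struct xen_domctl_assign_device {
    /* IN */
    uint32_t dev;       /* XEN_DOMCTL_DEV_* */
    uint32_t flags;
#define XEN_DOMCTL_DEV_RDM_RELAXED  1 /* assign only */
#define XEN_DOMCTL_DEV_USE_VPCI     2 /* HYPOTHETICAL: expose device via vPCI */
    union {
        struct {
            uint32_t machine_sbdf;  /* machine PCI ID of assigned device */
        } pci;
        struct {
            uint32_t size;          /* length of the path */
            XEN_GUEST_HANDLE_64(char) path; /* path to the device tree node */
        } dt;
    } u;
};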

You could also emulate the bridge in Xen and then have QEMU and
vPCI handle accesses to the PCI config space for different devices.
The ioreq interface already allows registering for config space
accesses on a per-SBDF basis.
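
To make that concrete, here is a minimal sketch of how an external device
model can claim the config space of a single device through
libxendevicemodel, much as x86 HVM passthrough does today.  The domid,
the 0000:03:00.0 SBDF and the lack of error reporting are just for the
example:

#include <stdint.h>
#include <stdbool.h>
#include <xendevicemodel.h>

/* Sketch: create an ioreq server and ask Xen to forward PCI config
 * space accesses for one device (segment 0, bus 3, slot 0, func 0)
 * to it, instead of handling them elsewhere. */
static int claim_pci_device(domid_t domid)
{
    xendevicemodel_handle *xdm = xendevicemodel_open(NULL, 0);
    ioservid_t id;
    int rc;

    if ( !xdm )
        return -1;

    /* 0 == no buffered ioreq page for this server. */
    rc = xendevicemodel_create_ioreq_server(xdm, domid, 0, &id);
    if ( rc )
        goto out;

    /* Register for config space accesses to 0000:03:00.0 only. */
    rc = xendevicemodel_map_pcidev_to_ioreq_server(xdm, domid, id, 0, 3, 0, 0);
    if ( rc )
        goto out;

    rc = xendevicemodel_set_ioreq_server_state(xdm, domid, id, true);

 out:
    xendevicemodel_close(xdm);
    return rc;
}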

XenServer currently has a use case where generic PCI device
passthrough is handled by QEMU, while some GPUs are passed through
using a custom emulator.  So some domains effectively end up with a
QEMU instance and a custom emulator; I don't see why you couldn't
technically replace QEMU with vPCI in this scenario.

The PCI root complex might be emulated by QEMU, or ideally by Xen.
That shouldn't prevent other device models from handling accesses for
devices, as long as accesses to the ECAM region(s) are trapped and
decoded by Xen.  IOW: if we want bridges to be emulated by ioreq
servers, we need to introduce a hypercall to register ECAM regions
with Xen so that it can decode accesses and forward them
appropriately.
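
Just to illustrate the shape such an interface could take, a purely
hypothetical sketch follows; neither the DMOP number nor the structure
exist today, and all the names are invented:

/* HYPOTHETICAL - not in the Xen public headers.  An ioreq server that
 * emulates a bridge would tell Xen where its ECAM window lives in guest
 * physical address space; Xen traps accesses in that range, decodes them
 * into SBDF + register offset, and forwards each access to whichever
 * ioreq server (or vPCI) has registered for that SBDF. */
#define XEN_DMOP_map_ecam_to_ioreq_server 64 /* invented opcode */

struct xen_dm_op_map_ecam_to_ioreq_server {
    uint16_t id;        /* IN - ioreq server emulating the bridge */
    uint16_t segment;   /* IN - PCI segment decoded by this ECAM window */
    uint8_t  start_bus; /* IN - first bus number covered */
    uint8_t  end_bus;   /* IN - last bus number covered */
    uint16_t pad;
    uint64_t gpa;       /* IN - guest physical base address of the window */
};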

Thanks, Roger.



 

