
Re: [PATCH v2 3/3] [FUTURE] xen/arm: enable vPCI for domUs


  • To: Julien Grall <julien@xxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 7 Jul 2023 15:13:08 +0200
  • Cc: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, Artem Mygaiev <artem_mygaiev@xxxxxxxx>
  • Delivery-date: Fri, 07 Jul 2023 13:13:41 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, Jul 07, 2023 at 01:09:40PM +0100, Julien Grall wrote:
> Hi,
> 
> On 07/07/2023 12:34, Roger Pau Monné wrote:
> > On Fri, Jul 07, 2023 at 12:16:46PM +0100, Julien Grall wrote:
> > > 
> > > 
> > > On 07/07/2023 11:47, Roger Pau Monné wrote:
> > > > On Fri, Jul 07, 2023 at 11:33:14AM +0100, Julien Grall wrote:
> > > > > Hi,
> > > > > 
> > > > > On 07/07/2023 11:06, Roger Pau Monné wrote:
> > > > > > On Fri, Jul 07, 2023 at 10:00:51AM +0100, Julien Grall wrote:
> > > > > > > On 07/07/2023 02:47, Stewart Hildebrand wrote:
> > > > > > > > Note that CONFIG_HAS_VPCI_GUEST_SUPPORT is not currently used in
> > > > > > > > the upstream code base. It will be used by the vPCI series [1].
> > > > > > > > This patch is intended to be merged as part of the vPCI series.
> > > > > > > > 
> > > > > > > > v1->v2:
> > > > > > > > * new patch
> > > > > > > > ---
> > > > > > > >      xen/arch/arm/Kconfig              | 1 +
> > > > > > > >      xen/arch/arm/include/asm/domain.h | 2 +-
> > > > > > > >      2 files changed, 2 insertions(+), 1 deletion(-)
> > > > > > > > 
> > > > > > > > diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> > > > > > > > index 4e0cc421ad48..75dfa2f5a82d 100644
> > > > > > > > --- a/xen/arch/arm/Kconfig
> > > > > > > > +++ b/xen/arch/arm/Kconfig
> > > > > > > > @@ -195,6 +195,7 @@ config PCI_PASSTHROUGH
> > > > > > > >         depends on ARM_64
> > > > > > > >         select HAS_PCI
> > > > > > > >         select HAS_VPCI
> > > > > > > > +       select HAS_VPCI_GUEST_SUPPORT
> > > > > > > >         default n
> > > > > > > >         help
> > > > > > > >           This option enables PCI device passthrough
> > > > > > > > diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> > > > > > > > index 1a13965a26b8..6e016b00bae1 100644
> > > > > > > > --- a/xen/arch/arm/include/asm/domain.h
> > > > > > > > +++ b/xen/arch/arm/include/asm/domain.h
> > > > > > > > @@ -298,7 +298,7 @@ static inline void arch_vcpu_block(struct vcpu *v) {}
> > > > > > > >      #define arch_vm_assist_valid_mask(d) (1UL << VMASST_TYPE_runstate_update_flag)
> > > > > > > > -#define has_vpci(d) ({ IS_ENABLED(CONFIG_HAS_VPCI) && is_hardware_domain(d); })
> > > > > > > > +#define has_vpci(d)    ({ (void)(d); IS_ENABLED(CONFIG_HAS_VPCI); })
> > > > > > > 
> > > > > > > As I mentioned in the previous patch, wouldn't this enable vPCI
> > > > > > > unconditionally for all domains? Shouldn't this instead be an
> > > > > > > optional feature selected by the toolstack?
> > > > > > 
> > > > > > I do think so; at least on x86 we signal whether vPCI should be
> > > > > > enabled for a domain using xen_arch_domainconfig at domain creation.
> > > > > > 
> > > > > > Ideally we would like to do this on a per-device basis for domUs, so
> > > > > > we should consider adding a new flag to xen_domctl_assign_device in
> > > > > > order to signal whether the assigned device should use vPCI.
> > > > > 
> > > > > I am a bit confused by this paragraph. If the device is not using vPCI,
> > > > > how will it be exposed to the domain? Are you planning to support both
> > > > > vPCI and PV PCI passthrough for the same domain?
> > > > 
> > > > You could have an external device model handling it using the ioreq
> > > > interface, like we currently do passthrough for HVM guests.
> > > 
> > > IMHO, if one decides to use QEMU for emulating the host bridge, then
> > > there is limited point in also asking Xen to emulate the host bridge for
> > > some other device. So what would be the use case where you would want
> > > this to be a per-device decision?
> > 
> > You could also emulate the bridge in Xen and then have QEMU and
> > vPCI handle accesses to the PCI config space for different devices.
> > The ioreq interface already allows registering for config space
> > accesses on a per SBDF basis.
> > 
> > XenServer currently has a use case where generic PCI device
> > passthrough is handled by QEMU, while some GPUs are passed through
> > using a custom emulator.  So some domains effectively end up with a
> > QEMU instance and a custom emulator; I don't see why you couldn't
> > technically replace QEMU with vPCI in this scenario.
> > 
> > The PCI root complex might be emulated by QEMU, or ideally by Xen.
> > That shouldn't prevent other device models from handling accesses for
> > devices, as long as accesses to the ECAM region(s) are trapped and
> > decoded by Xen.  IOW: if we want bridges to be emulated by ioreq
> > servers we need to introduce a hypercall to register ECAM regions
> > with Xen so that it can decode accesses and forward them
> > appropriately.
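
[Purely as an illustration of the registration idea quoted above: nothing below exists in Xen today; the struct name and all fields are made up.]

    /*
     * Hypothetical sketch only: an ioreq server (or the toolstack on its
     * behalf) would tell Xen where a guest ECAM window lives, so Xen can
     * trap accesses to it, decode the SBDF/offset, and forward each access
     * to whichever entity (vPCI or an ioreq server) registered for that SBDF.
     */
    struct xen_dm_op_register_ecam {
        uint64_t gpa;        /* guest physical base of the ECAM window */
        uint64_t size;       /* window size, 1MiB per decoded bus */
        uint16_t segment;    /* PCI segment the window decodes */
        uint8_t  start_bus;  /* first bus number covered */
        uint8_t  end_bus;    /* last bus number covered */
    };
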
> 
> Thanks for the clarification. Going back to the original discussion: even
> with this setup, I think we still need to tell Xen at domain creation
> whether vPCI will be used (think PCI hotplug).

Well, for PCI hotplug you will still need to execute a
XEN_DOMCTL_assign_device hypercall in order to assign the device, at
which point you could pass the vPCI flag.
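
To make that concrete, a sketch of what such a flag could look like; the
XEN_DOMCTL_DEV_USE_VPCI name and bit value are made up, and the structure
below is a trimmed-down version of the real xen_domctl_assign_device in
xen/include/public/domctl.h, which has more members:

    #include <stdint.h>

    /*
     * Sketch only: a hypothetical per-device flag passed at assignment
     * time, telling Xen to handle the device's config space with vPCI
     * instead of leaving it to an external device model.
     */
    #define XEN_DOMCTL_DEV_PCI        0
    #define XEN_DOMCTL_DEV_USE_VPCI   (1u << 1)   /* hypothetical bit */

    struct xen_domctl_assign_device {
        uint32_t dev;     /* XEN_DOMCTL_DEV_PCI, ... */
        uint32_t flags;   /* existing flags field where the bit above would live */
        union {
            struct {
                uint32_t machine_sbdf;  /* machine SBDF of the device */
            } pci;
        } u;
    };
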

What you likely want at domain create is whether the IOMMU should be
enabled or not, as we no longer allow late enabling of the IOMMU once
the domain has been created.
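
That part already has an interface today: IOMMU usage is requested through
the domain-create flags. A minimal sketch, assuming the current
XEN_DOMCTL_CDF_iommu semantics (the other values here are illustrative):

    /*
     * Properties that cannot be enabled after the domain exists, such as
     * IOMMU usage, are requested at create time.  XEN_DOMCTL_CDF_iommu and
     * XEN_DOMCTL_CDF_hvm are existing flags; the rest is illustrative.
     */
    struct xen_domctl_createdomain config = {
        .flags     = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_iommu,
        .max_vcpus = 4,
    };
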

One question I have is whether Arm plans to allow exposing fully
emulated devices on the PCI config space, or whether that would be
limited to PCI device passthrough?

IOW: should an emulated PCI root complex be unconditionally exposed to
guests so that random ioreq servers can register for SBDF slots?
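
For reference, this is roughly how an ioreq server claims a single SBDF on
x86 HVM today, assuming Arm would reuse the same interface; the handle, the
ioreq server id and the 0000:03:00.0 slot are placeholders:

    #include <xendevicemodel.h>

    /*
     * Claim PCI config-space accesses for one SBDF (here 0000:03:00.0) on
     * behalf of an already-created ioreq server.  Xen then forwards config
     * accesses targeting that slot to this server; everything else stays
     * with whoever emulates the root complex (QEMU, or vPCI in Xen).
     */
    static int claim_sbdf(xendevicemodel_handle *dmod, domid_t domid,
                          ioservid_t id)
    {
        return xendevicemodel_map_pcidev_to_ioreq_server(dmod, domid, id,
                                                         0 /* segment */,
                                                         3 /* bus */,
                                                         0 /* device */,
                                                         0 /* function */);
    }
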

> After that, the device assignment hypercall could have a way to say whether
> the device will be emulated by vPCI. But I don't think this is necessary to
> have from day one, as the ABI will not be stable (this is a DOMCTL).

Indeed, it's not a stable interface, but we might as well get
something sane if we have to plumb it through the tools.  Whether it's
a domain create flag or a device attach flag, you will need to do some
plumbing at the toolstack level, at which point we might as well use
an interface that doesn't have arbitrary limits.

Thanks, Roger.



 

