
Re: [PATCH 2/2] xen/virtio: Avoid use of the dom0 backend in dom0


  • To: Oleksandr Tyshchenko <olekstysh@xxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 5 Jul 2023 10:32:34 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Petr Pavlu <petr.pavlu@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>, vikram.garhwal@xxxxxxx
  • Delivery-date: Wed, 05 Jul 2023 08:33:00 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Jul 04, 2023 at 08:14:59PM +0300, Oleksandr Tyshchenko wrote:
> On Tue, Jul 4, 2023 at 5:49 PM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> 
> Hello all.
> 
> [sorry for the possible format issues]
> 
> 
> > On Tue, Jul 04, 2023 at 01:43:46PM +0200, Marek Marczykowski-Górecki wrote:
> > > Hi,
> > >
> > > FWIW, I ran into this issue some time ago too. I run Xen on top of
> > > KVM and then pass some of the virtio devices (the network one
> > > specifically) through to a (PV) guest. So, I hit both cases, the
> > > dom0 one and the domU one. As a temporary workaround I needed to
> > > disable CONFIG_XEN_VIRTIO completely (just disabling
> > > CONFIG_XEN_VIRTIO_FORCE_GRANT was not enough to fix it).
> > > With that context in place, the actual response below.
> > >
> > > On Tue, Jul 04, 2023 at 12:39:40PM +0200, Juergen Gross wrote:
> > > > On 04.07.23 09:48, Roger Pau Monné wrote:
> > > > > On Thu, Jun 29, 2023 at 03:44:04PM -0700, Stefano Stabellini wrote:
> > > > > > On Thu, 29 Jun 2023, Oleksandr Tyshchenko wrote:
> > > > > > > On 29.06.23 04:00, Stefano Stabellini wrote:
> > > > > > > > I think we need to add a second way? It could be anything
> > > > > > > > that can help us distinguish between a non-grants-capable
> > > > > > > > virtio backend and a grants-capable virtio backend, such as:
> > > > > > > > - a string on xenstore
> > > > > > > > - a xen param
> > > > > > > > - a special PCI configuration register value
> > > > > > > > - something in the ACPI tables
> > > > > > > > - the QEMU machine type
> > > > > > >
> > > > > > >
> > > > > > > Yes, I remember there was a discussion regarding that. The
> > > > > > > point is to choose a solution that is functional for both PV
> > > > > > > and HVM *and* able to support hotplug. IIRC, xenstore could
> > > > > > > be a possible candidate.
> > > > > >
> > > > > > xenstore would be among the easiest to make work. The only
> > > > > > downside is the dependency on xenstore which otherwise
> > > > > > virtio+grants doesn't have.
> > > > >
> > > > > I would avoid introducing a dependency on xenstore, if nothing
> > > > > else we know it's a performance bottleneck.
> > > > >
> > > > > We would also need to map the virtio device topology into
> > > > > xenstore, so that we can pass different options for each device.
> > > >
> > > > This aspect (different options) is important. How do you want to
> > > > pass virtio device configuration parameters from dom0 to the
> > > > virtio backend domain? You probably need something like Xenstore
> > > > (a virtio-based alternative like virtiofs would work, too) for
> > > > that purpose.
> > > >
> > > > Mapping the topology should be rather easy via the PCI-Id, e.g.:
> > > >
> > > > /local/domain/42/device/virtio/0000:00:1c.0/backend
> > >
> > > While I agree this would probably be the simplest to implement, I
> > > don't like introducing a xenstore dependency into the virtio
> > > frontend either. Toolstack -> backend communication is probably
> > > easier to solve, as it's much more flexible (it could use the qemu
> > > cmdline, QMP, other similar mechanisms for non-qemu backends, etc.).
> >
> > I also think features should be exposed uniformly for devices; it's at
> > least weird to have certain features exposed in the PCI config space
> > while others are exposed in xenstore.
> >
> > For virtio-mmio this might get a bit confusing: are we going to add
> > xenstore entries based on the position of the device config mmio
> > region?
> >
> > I think on Arm PCI enumeration is not (usually?) done by the firmware,
> > in which case the SBDF expected by the tools/backend might be
> > different from the value assigned by the guest OS.
> >
> > I think there are two slightly different issues: one is how to pass
> > information to virtio backends. I think doing this initially based on
> > xenstore is not that bad, because it's an internal detail of the
> > backend implementation. However, passing information to virtio
> > frontends using xenstore is IMO a bad idea; there's already a way to
> > negotiate features between virtio frontends and backends, and Xen
> > should just expand and use that.
> >
> >
> 
> On Arm with device-tree we have special bindings whose purpose is to
> tell us whether we need to use grants for virtio, and the backend domid,
> for a particular device. Here on x86 we don't have a device tree, so we
> cannot (easily?) reuse this logic.
> 
> I have just recalled one idea suggested by Stefano some time ago [1].
> The context of the discussion was what to do when device-tree and ACPI
> cannot be reused (or something like that). The idea won't cover
> virtio-mmio, but I have heard that virtio-mmio usage with x86 Xen is a
> rather unusual case.
> 
> I will paste the text below for convenience.
> 
> **********
> 
> Part 1 (intro):
> 
> We could reuse a PCI config space register to expose the backend id.
> However, this solution requires a backend change (QEMU) to expose the
> backend id via an emulated register for each emulated device.
> 
> To avoid having to introduce a special config space register in all
> emulated PCI devices (virtio-net, virtio-block, etc) I wonder if we
> could add a special PCI config space register at the emulated PCI Root
> Complex level.
> 
> Basically the workflow would be as follow:
> 
> - Linux recognizes the PCI Root Complex as a Xen PCI Root Complex
> - Linux writes to a special PCI config space register of the Xen PCI
>   Root Complex the PCI device id (basically the BDF)
> - The Xen PCI Root Complex emulated by Xen answers by writing back to
>   the same location the backend id (domid of the backend)
> - Linux reads back the same PCI config space register of the Xen PCI
>   Root Complex and learns the relevant domid

IMO this seems awfully complex.  I'm not familiar with the VirtIO
spec, but I see there's a Vendor data capability; could we possibly
expose Xen-specific information in that capability?
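
Something along these lines is what I have in mind (just a sketch: it
assumes the virtio 1.2 vendor data capability layout with cfg_type 9,
Xen's PCI vendor ID 0x5853, and a made-up placement of the backend
domid right after the vendor_id field):

    /* Walk the capability list of a virtio PCI device looking for a
     * vendor data capability that carries the backend domid. */
    #include <linux/pci.h>

    #define VIRTIO_PCI_CAP_VENDOR_CFG  9       /* virtio 1.2 vendor data */
    #define XEN_PCI_VENDOR_ID          0x5853  /* Xen's PCI vendor ID */

    static int xen_virtio_backend_domid(struct pci_dev *pdev, u16 *domid)
    {
            u8 pos;

            for (pos = pci_find_capability(pdev, PCI_CAP_ID_VNDR); pos;
                 pos = pci_find_next_capability(pdev, pos, PCI_CAP_ID_VNDR)) {
                    u8 cfg_type;
                    u16 vendor;

                    /* virtio_pci_cap: cap_vndr, cap_next, cap_len, cfg_type */
                    pci_read_config_byte(pdev, pos + 3, &cfg_type);
                    if (cfg_type != VIRTIO_PCI_CAP_VENDOR_CFG)
                            continue;

                    pci_read_config_word(pdev, pos + 4, &vendor);
                    if (vendor != XEN_PCI_VENDOR_ID)
                            continue;

                    /* Hypothetical: domid stored right after vendor_id. */
                    pci_read_config_word(pdev, pos + 6, domid);
                    return 0;
            }

            return -ENODEV;
    }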

> Part 2 (clarification):
> 
> I think using a special config space register in the root complex would
> not be terrible in terms of guest changes because it is easy to
> introduce a new root complex driver in Linux and other OSes. The root
> complex would still be ECAM compatible so the regular ECAM driver would
> still work. A new driver would only be necessary if you want to be able
> to access the special config space register.

I'm slightly worried about this approach: we end up modifying the root
complex emulation in order to avoid modifying the PCI device emulation
in QEMU, and I'm not sure that's a good trade-off.

Note also that different architectures will likely have different root
complexes, and so you might need to modify several of them, plus then
arrange the PCI layout correctly in order to have the proper hierarchy,
so that devices belonging to different driver domains are assigned to
different bridges.
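
For reference, the guest side of the proposed handshake would be
roughly the following (a sketch only: the register offset and the
write-BDF/read-domid protocol are hypothetical, nothing like this
exists today):

    /* Query the backend domid for @dev through a special register in
     * the config space of the emulated Xen root complex @rc. */
    #define XEN_RC_BACKEND_REG  0x40   /* made-up register offset */

    static u16 xen_rc_query_backend(struct pci_dev *rc, struct pci_dev *dev)
    {
            u32 val;

            /* Write the device's BDF into the special register... */
            pci_write_config_dword(rc, XEN_RC_BACKEND_REG, pci_dev_id(dev));

            /* ...Xen's root complex emulation replaces it with the
             * backend domid, which we read back (truncated to 16 bits). */
            pci_read_config_dword(rc, XEN_RC_BACKEND_REG, &val);

            return val;
    }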

> 
> 
> **********
> What do you think about it? Are there any pitfalls, etc.? This also
> requires system changes, but at least no virtio spec changes.

Why are we so reluctant to add spec changes?  I understand this might
take time and effort, but it's the only way IMO to build a sustainable
VirtIO Xen implementation.  Did we already attempt to negotiate
Xen-related spec changes with OASIS, and were those refused?

Thanks, Roger.



 

