
Re: [PATCH 2/2] xen/virtio: Avoid use of the dom0 backend in dom0


  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 7 Jul 2023 16:48:44 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <olekstysh@xxxxxxxxx>, Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Petr Pavlu <petr.pavlu@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>, vikram.garhwal@xxxxxxx
  • Delivery-date: Fri, 07 Jul 2023 14:49:08 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, Jul 07, 2023 at 04:27:59PM +0200, Juergen Gross wrote:
> On 07.07.23 16:10, Juergen Gross wrote:
> > On 07.07.23 11:50, Roger Pau Monné wrote:
> > > On Fri, Jul 07, 2023 at 06:38:48AM +0200, Juergen Gross wrote:
> > > > On 06.07.23 23:49, Stefano Stabellini wrote:
> > > > > On Thu, 6 Jul 2023, Roger Pau Monné wrote:
> > > > > > On Wed, Jul 05, 2023 at 03:41:10PM -0700, Stefano Stabellini wrote:
> > > > > > > On Wed, 5 Jul 2023, Roger Pau Monné wrote:
> > > > > > > > On Tue, Jul 04, 2023 at 08:14:59PM +0300, Oleksandr Tyshchenko wrote:
> > > > > > > > > Part 2 (clarification):
> > > > > > > > > 
> > > > > > > > > I think using a special config space register in the root
> > > > > > > > > complex would not be terrible in terms of guest changes,
> > > > > > > > > because it is easy to introduce a new root complex driver in
> > > > > > > > > Linux and other OSes. The root complex would still be ECAM
> > > > > > > > > compatible, so the regular ECAM driver would still work. A
> > > > > > > > > new driver would only be necessary if you want to be able to
> > > > > > > > > access the special config space register.
> > > > > > > > 
> > > > > > > > I'm slightly worried about this approach: we end up modifying
> > > > > > > > the root complex emulation in order to avoid modifying a PCI
> > > > > > > > device emulation in QEMU, and I'm not sure that's a good
> > > > > > > > trade-off.
> > > > > > > > 
> > > > > > > > Note also that different architectures will likely have
> > > > > > > > different root complexes, and so you might need to modify
> > > > > > > > several of them, plus then arrange the PCI layout correctly in
> > > > > > > > order to have the proper hierarchy, so that devices belonging
> > > > > > > > to different driver domains are assigned to different bridges.
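To make the special-register idea above concrete: a guest driver that
knows about such a register could fetch the backend domid with a sketch
like the one below. This is purely illustrative; the 0x40 offset, the
register name, and the assumption that the root complex is bus 0,
device 0, function 0 are all invented for the example.

#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical Xen-specific register in the root complex's config
 * space; offset and semantics are made up for illustration only. */
#define XEN_RC_BACKEND_DOMID	0x40

static u16 xen_rc_read_backend_domid(void __iomem *ecam_base)
{
	/* Assume bus 0, device 0, function 0 is the root complex, so
	 * its config space sits at the start of the ECAM window. */
	return readl(ecam_base + XEN_RC_BACKEND_DOMID) & 0xffff;
}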
> > > > > > > 
> > > > > > > I do think that adding something to the PCI config space
> > > > > > > somewhere is the best option, because it is not dependent on
> > > > > > > ACPI and it is not dependent on xenstore, both of which are
> > > > > > > very undesirable.
> > > > > > > 
> > > > > > > I am not sure where specifically the best place is. These are 4
> > > > > > > ideas we came up with:
> > > > > > > 1. PCI root complex
> > > > > > > 2. a register on the device itself
> > > > > > > 3. a new capability of the device
> > > > > > > 4. add one extra dummy PCI device for the sole purpose of
> > > > > > >    exposing the grants capability
> > > > > > > 
> > > > > > > 
> > > > > > > Looking at the spec, there is a way to add a vendor-specific
> > > > > > > capability (cap_vndr = 0x9). Could we use that? It doesn't look
> > > > > > > like it is used today; Linux doesn't parse it.
> > > > > > 
> > > > > > I did wonder the same from a quick look at the spec.  There's
> > > > > > however text in the specification that says:
> > > > > > 
> > > > > > "The driver SHOULD NOT use the Vendor data capability except for
> > > > > > debugging and reporting purposes."
> > > > > > 
> > > > > > So we would at least need to change that, because the capability
> > > > > > would then be used for purposes other than debugging and
> > > > > > reporting.
> > > > > > 
> > > > > > Seems like a minor adjustment, so it might be worth asking
> > > > > > upstream about their opinion, and getting a conversation started.
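For reference, the Vendor data capability header is small, so carrying
a domid in it would be straightforward. A minimal sketch, with the
header fields as the virtio spec lays them out and a trailing domid
field that is purely our hypothetical addition:

#include <linux/types.h>

struct virtio_pci_vndr_data {
	u8  cap_vndr;		/* generic PCI field: PCI_CAP_ID_VNDR (0x09) */
	u8  cap_next;		/* generic PCI field: next capability pointer */
	u8  cap_len;		/* generic PCI field: capability length */
	u8  cfg_type;		/* identifies the structure */
	__le16 vendor_id;	/* identifies the vendor-specific format */
	/* hypothetical Xen payload, not part of the spec: */
	__le16 backend_domid;	/* domid the frontend should grant to */
};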
> > > > > 
> > > > > Wait, wouldn't this use-case fall under "reporting"?  It is
> > > > > exactly what we are doing, right?
> > > > 
> > > > I'd understand "reporting" as e.g. logging, transferring statistics, ...
> > > > 
> > > > We'd like to use it for configuration purposes.
> > > 
> > > I've also read it that way.
> > > 
> > > > Another idea would be to enhance the virtio IOMMU device to suit
> > > > our needs: we could add the domid as another virtio IOMMU device
> > > > capability and (for now) use bypass mode for all "productive"
> > > > devices.
> > > 
> > > If we have to start adding capabilities, won't it be easier to just
> > > add it to each device instead of adding it to the virtio IOMMU?  Or
> > > is the parsing of capabilities device specific, and hence we would
> > > have to implement such parsing for each device?  I would expect some
> > > capabilities to be shared between all devices, and a Xen capability
> > > could be one of those.
> > 
> > Have a look at [1], which describes the common device config layout.
> > The problem here is that we'd need to add the domid after the queue
> > specific data, resulting in a mess if further queue fields were added
> > later.
> > 
> > We could try that, of course.
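For reference, the common configuration layout from the spec,
abbreviated here, shows the problem: the device-global fields come
first and the queue-specific block sits at the end, so a new global
field like a domid could only be appended after the queue fields:

#include <linux/types.h>

struct virtio_pci_common_cfg {
	/* About the whole device. */
	__le32 device_feature_select;
	__le32 device_feature;
	__le32 driver_feature_select;
	__le32 driver_feature;
	__le16 msix_config;
	__le16 num_queues;
	u8     device_status;
	u8     config_generation;

	/* About a specific virtqueue. */
	__le16 queue_select;
	__le16 queue_size;
	/* ... further queue_* fields ... */

	/* A new xen_backend_domid would have to go here, after the
	 * queue block, clashing with any future queue_* additions. */
};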
> 
> Thinking more about it, the virtio IOMMU device seems to be a better fit:
> 
> If we added the domid to the device's PCI config space, the value would
> be controlled by the backend domain. IMO the domid passed to the frontend
> should be controlled by a trusted entity (dom0 or the hypervisor), which
> would be the natural backend of the virtio IOMMU device.
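Concretely, that could mean gating a new config field behind a new
feature bit. A minimal sketch, assuming a hypothetical
VIRTIO_IOMMU_F_XEN_DOMID feature; the existing fields are roughly as in
the spec, the last one is our addition:

#include <linux/types.h>

struct virtio_iommu_config {
	__le64 page_size_mask;
	struct { __le64 start; __le64 end; } input_range;
	struct { __le32 start; __le32 end; } domain_range;
	__le32 probe_size;
	u8     bypass;
	u8     reserved[3];
	/* hypothetical addition, only valid if the (equally
	 * hypothetical) VIRTIO_IOMMU_F_XEN_DOMID feature was
	 * negotiated: */
	__le16 xen_backend_domid;
};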

Hm, yes.  I'm however failing to see how a backend could exploit that.

The guest would be granting memory to a different domain than the one
running the backend; but otherwise that memory would be granted to the
backend domain anyway, which could then also make it available to other
domains (without having to play with the reported backend domid).
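For context, the reason the reported domid matters at all is that the
frontend feeds it straight into the grant interface; in Linux,
simplified, something along these lines:

#include <xen/grant_table.h>

/* Simplified sketch: the frontend grants each page of a virtio
 * buffer to whatever backend domid it was told about, so a
 * maliciously reported domid redirects those mapping rights. */
static int share_page_with_backend(domid_t backend_domid,
				   unsigned long gfn)
{
	/* Returns a grant reference on success, negative on error. */
	return gnttab_grant_foreign_access(backend_domid, gfn, 0);
}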

Thanks, Roger.
