
Re: [PATCH 2/2] xen/virtio: Avoid use of the dom0 backend in dom0



On 07.07.23 16:48, Roger Pau Monné wrote:
On Fri, Jul 07, 2023 at 04:27:59PM +0200, Juergen Gross wrote:
On 07.07.23 16:10, Juergen Gross wrote:
On 07.07.23 11:50, Roger Pau Monné wrote:
On Fri, Jul 07, 2023 at 06:38:48AM +0200, Juergen Gross wrote:
On 06.07.23 23:49, Stefano Stabellini wrote:
On Thu, 6 Jul 2023, Roger Pau Monné wrote:
On Wed, Jul 05, 2023 at 03:41:10PM -0700, Stefano Stabellini wrote:
On Wed, 5 Jul 2023, Roger Pau Monné wrote:
On Tue, Jul 04, 2023 at 08:14:59PM +0300, Oleksandr Tyshchenko wrote:
Part 2 (clarification):

I think using a special config space register in the root complex would
not be terrible in terms of guest changes because it is easy to
introduce a new root complex driver in Linux and other OSes. The root
complex would still be ECAM compatible so the regular ECAM driver would
still work. A new driver would only be necessary if you want to be able
to access the special config space register.
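
For illustration, a minimal sketch of what such a driver-side access
could look like in Linux; the register offset, its layout and the
helper name are purely hypothetical, not an existing interface:

/*
 * Sketch only: assumes a hypothetical vendor-defined register at
 * offset 0x40 in the root complex's config space reporting the
 * backend domid.
 */
#include <linux/pci.h>

#define XEN_RC_BACKEND_DOMID_REG  0x40  /* hypothetical offset */

static u16 xen_rc_read_backend_domid(struct pci_dev *root_port)
{
        u32 val;

        /*
         * Plain ECAM config space access: the regular ECAM driver
         * keeps working, only code aware of the register reads it.
         */
        pci_read_config_dword(root_port, XEN_RC_BACKEND_DOMID_REG, &val);
        return val & 0xffff;
}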

I'm slightly worried about this approach: we end up modifying a root
complex emulation in order to avoid modifying a PCI device emulation
in QEMU, and I'm not sure that's a good trade-off.

Note also that different architectures will likely have different root
complexes, and so you might need to modify several of them, plus then
arrange the PCI layout correctly in order to have the proper hierarchy
so that devices belonging to different driver domains are assigned to
different bridges.

I do think that adding something to a PCI config register somewhere is
the best option because it does not depend on ACPI and it does not
depend on xenstore, both of which are very undesirable.

I am not sure where specifically the best place is. These are the 4
ideas we came up with:
1. PCI root complex
2. a register on the device itself
3. a new capability of the device
4. add one extra dummy PCI device for the sole purpose of exposing the
   grants capability


Looking at the spec, there is a way to add a vendor-specific capability
(cap_vndr = 0x9). Could we use that? It doesn't look like it is used
today; Linux doesn't parse it.
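
For reference, a rough sketch of what such a capability could look
like; the header fields follow the generic virtio PCI capability
layout from the spec, while the Xen payload (the backend domid) is
purely an assumption:

/*
 * Sketch only: generic virtio PCI vendor-specific capability header
 * plus a hypothetical Xen-specific payload. backend_domid is not
 * part of any existing specification.
 */
struct virtio_pci_xen_cap {
        u8  cap_vndr;      /* PCI capability ID: 0x09 (vendor specific) */
        u8  cap_next;      /* link to the next capability */
        u8  cap_len;       /* length of this structure */
        u8  cfg_type;      /* identifies the structure type */
        u16 vendor_id;     /* identifies the vendor-specific format */
        u16 backend_domid; /* hypothetical: domid to grant pages to */
};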

I did wonder the same from a quick look at the spec.  There is however
text in the specification that says:

"The driver SHOULD NOT use the Vendor data capability except for
debugging and reporting purposes."

So we would at least need to change that, because the capability would
then be used for purposes other than debugging and reporting.

Seems like a minor adjustment, so it might be worth asking upstream for
their opinion and getting a conversation started.

Wait, wouldn't this use-case fall under "reporting"? It is exactly what
we are doing, right?

I'd understand "reporting" as e.g. logging, transferring statistics, ...

We'd like to use it for configuration purposes.

I've also read it that way.

Another idea would be to enhance the virtio IOMMU device to suit our needs:
we could add the domid as another virtio IOMMU device capability and (for now)
use bypass mode for all "productive" devices.
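
A rough sketch of how that could look on top of the existing
virtio-iommu config layout (as in include/uapi/linux/virtio_iommu.h);
the backend_domid field and the feature bit guarding it are
hypothetical:

/*
 * Sketch only: existing virtio-iommu config space with a hypothetical
 * backend_domid carved out of the reserved bytes, valid only if a new
 * (made-up) feature bit VIRTIO_IOMMU_F_BACKEND_DOMID was negotiated.
 */
struct virtio_iommu_range_32 {
        __le32  start;
        __le32  end;
};

struct virtio_iommu_range_64 {
        __le64  start;
        __le64  end;
};

struct virtio_iommu_config {
        __le64  page_size_mask;                    /* supported page sizes */
        struct virtio_iommu_range_64 input_range;  /* supported IOVA range */
        struct virtio_iommu_range_32 domain_range; /* domain ID range */
        __le32  probe_size;                        /* probe buffer size */
        __u8    bypass;        /* global bypass, what we'd use for now */
        __u8    reserved;
        __le16  backend_domid; /* hypothetical: set by the trusted backend */
};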

If we have to start adding capabilities, won't it be easier to just add
it to each device instead of adding it to the virtio IOMMU? Or is the
parsing of capabilities device specific, and hence we would have to
implement such parsing for each device? I would expect some
capabilities to be shared between all devices, and a Xen capability
could be one of those.

Have a look at [1], which describes the common device config layout.
The problem here is that we'd need to add the domid after the
queue-specific data, resulting in a mess if further queue fields were
added later.
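
For illustration, the shape of the common device configuration from
the spec, abbreviated; the trailing backend_domid shows where such a
field would have to go (it is hypothetical, not part of the spec):

/* Abbreviated layout of struct virtio_pci_common_cfg from the spec. */
struct virtio_pci_common_cfg {
        /* Fields about the whole device. */
        __le32  device_feature_select;
        __le32  device_feature;
        __le32  driver_feature_select;
        __le32  driver_feature;
        __le16  msix_config;
        __le16  num_queues;
        __u8    device_status;
        __u8    config_generation;

        /*
         * Fields about the currently selected virtqueue. New queue
         * fields keep being appended here (e.g. queue_notify_data
         * and queue_reset were added in virtio 1.2).
         */
        __le16  queue_select;
        __le16  queue_size;
        /* ... */
        __le64  queue_device;

        /*
         * Hypothetical: a domid could only go after the queue block,
         * so every future queue_* field would land behind it.
         */
        __le16  backend_domid;
};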

We could try that, of course.

Thinking more about it, the virtio IOMMU device seems to be a better fit:

If we added the domid to the device's PCI config space, the value would
be controlled by the backend domain. IMO the domid passed to the frontend
should be controlled by a trusted entity (dom0 or the hypervisor), which
would be the natural backend of the virtio IOMMU device.

Hm, yes.  I'm however failing to see how a backend could exploit that.

The guest would be granting memory to a different domain than the one
running the backend, but otherwise that memory would be granted to the
backend domain, which could then also make it available to other
domains (without having to play with the reported backend domid).

I agree that an exploit is at least not obvious.

It is still not a clean solution, though.

Giving the wrong domain direct access to some of the guest's memory is
worse than being able to pass the contents indirectly to the wrong
domain, IMHO.


Juergen
