
Mapping non-pinned memory from one Xen domain into another


  • To: Xen developer discussion <xen-devel@xxxxxxxxxxxxxxxxxxxx>, dri-devel@xxxxxxxxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx, Jan Beulich <jbeulich@xxxxxxxx>, Val Packett <val@xxxxxxxxxxxxxxxxxxxxxx>, Ariadne Conill <ariadne@ariadne.space>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • From: Demi Marie Obenour <demiobenour@xxxxxxxxx>
  • Date: Tue, 24 Mar 2026 10:17:02 -0400
  • Delivery-date: Tue, 24 Mar 2026 14:17:21 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Here is a proposed design document for supporting mapping GPU VRAM
and/or file-backed memory into other domains.  It is not in the form
of a patch, both because the leading + characters would just make it
harder to read for no particular gain, and because this is still an
RFC.  Once it is ready to merge, I'll send a proper patch.
Nevertheless, you can consider this to be:

Signed-off-by: Demi Marie Obenour <demiobenour@xxxxxxxxx>

This approach is very different from the "frontend-allocates"
approach used elsewhere in Xen.  It is very much Linux-centric,
rather than Xen-centric.  In fact, MMU notifiers were invented for
KVM, and this approach is exactly the same as the one KVM implements.
However, to the best of my understanding, the design described here is
the only viable one.  Linux MM and GPU drivers require it, and changes
to either to relax this requirement will not be accepted upstream.
---
# Memory lending: Mapping pageable memory, such as GPU VRAM, from one Xen domain into another

## Background

Some Linux kernel subsystems require full control over certain memory
regions.  This includes the ability to handle page faults from any
entity accessing this memory.  Such entities include not only that
kernel's userspace, but also kernels belonging to other guests.

For instance, GPU drivers reserve the right to migrate data between
VRAM and system RAM at any time.  Furthermore, there is a set of
page tables between the "aperture" (mapped as a PCI BAR) and the
actual VRAM.  This means that the GPU driver can make the memory
temporarily inaccessible to the CPU.  This is in fact _required_
when resizable BAR is not supported, as otherwise there is too much
VRAM to expose it all via a single BAR.

Since the backing storage of this memory must be movable, pinning
it is not supported.  However, the existing grant table interface
requires pinned memory.  Therefore, such memory currently cannot be
shared with another guest.  As a result, implementing virtio-GPU blob
objects is not possible.  Since blob objects are a prerequisite for
both Venus and native contexts, supporting Vulkan via virtio-GPU on
Xen is also impossible.

Filesystem Direct Access (DAX) also relies on non-pinned memory.
In the (now rare) case of persistent memory, this is because the
filesystem may need to move data blocks around on disk.  In the
case of virtio-pmem and virtio-fs, it is because page faults on write
operations are used to inform filesystems that they need to write the
data back at some point.  Without these page faults, filesystems will
not write back the data and silent data loss will result.

There are other use-cases for this too.  For instance, virtio-GPU
cross-domain Wayland exposes host shared memory buffers to the guest.
These buffers are mmap()'d file descriptors provided by the Wayland
compositor, and as such are not guaranteed to be anonymous memory.
Using grant tables for such mappings would conflict with the design
of existing virtio-GPU implementations, which assume that GPU VRAM
and shared memory can be handled uniformly.

Additionally, this is needed to support paging guest memory out to the
host's disks.  While this is significantly less efficient than using
an in-guest balloon driver, it has the advantage of not requiring
guest cooperation.  Therefore, it can be useful for situations in
which the performance of a guest is irrelevant, but where saving the
guest isn't appropriate.

## Informing drivers that they must stop using memory: MMU notifiers

Kernel drivers (such as xen_privcmd) in the domain that owns the
GPU (the "host") may map GPU memory buffers.  However, they must
register an *MMU notifier*.  This is a callback that Linux core memory
management code ("MM") uses to tell the driver that it must stop
all accesses to the memory.  Once the memory is no longer accessed,
Linux assumes it can do whatever it wants with this memory:

- The GPU driver can move it from VRAM to system RAM or vice versa,
  move it within VRAM or system RAM, or make it temporarily
  inaccessible so that other VRAM can be accessed.
- MM can swap the page out to disk/zram/etc.
- MM can move the page in system RAM to create huge pages.
- MM can write the pages out to their backing files and then free them.
- Anything else in Linux can do whatever it wants with the memory.
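The contract behind this list can be illustrated with a toy model
(plain Python; the names are invented, the real interface being
Linux's MMU notifiers): once the invalidate callback has run, the
consumer's mapping is gone, and the next access must go through a
fault path that re-establishes it at the memory's current location.

```python
# Toy model of the MMU-notifier contract.  Names are illustrative,
# not the real Linux mmu_notifier API.

class BackingStore:
    """Core MM / GPU driver side: owns the pages and may move them."""
    def __init__(self):
        self.location = "VRAM"   # where the data currently lives
        self.notifiers = []      # registered consumers

    def register_notifier(self, notifier):
        self.notifiers.append(notifier)

    def migrate(self, new_location):
        # Before touching the memory, every mapping must be revoked.
        for n in self.notifiers:
            n.invalidate()              # must finish in bounded time
        self.location = new_location    # now MM may do what it wants


class Consumer:
    """A driver (such as xen_privcmd) that maps the memory."""
    def __init__(self, store):
        self.store = store
        self.mapped = False
        store.register_notifier(self)

    def invalidate(self):
        # Called by MM: stop all access.  May sleep, must not block
        # forever, and must not wait on other guests.
        self.mapped = False

    def access(self):
        if not self.mapped:
            # Fault path: re-establish the mapping where the data is now.
            self.mapped = True
        return self.store.location
```

Accessing through the stale mapping instead of re-faulting would be
precisely the use-after-free this section warns about.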

Suspending access to memory is not allowed to block indefinitely.
It can sleep, but it must finish in finite time regardless of what
userspace (or other VMs) do.  Otherwise, bad things (which I believe
include deadlocks) may result.  I believe it can fail temporarily,
but permanent failure is not allowed.  Once the MMU notifier
has succeeded, userspace or other domains **must not be allowed to
access the memory**.  This would be an exploitable use-after-free
vulnerability.

Due to these requirements, MMU notifier callbacks must not require
cooperation from other guests.  This means that they are not allowed to
wait for memory that has been granted to another guest to no longer
be mapped by that guest.  Therefore, MMU notifiers and the use of
grant tables are inherently incompatible.

## Memory lending: A different approach

Instead, xen_privcmd must use a different hypercall to _lend_ memory to
another domain (the "guest").  When MM triggers the MMU notifier
covering the lent range, xen_privcmd _tells_ Xen (via hypercall) to
revoke the guest's access
to the memory.  This hypercall _must succeed in bounded time_ even
if the guest is malicious.

Since the borrowing guest is not aware this has happened, it will
continue to access the memory.  This will cause p2m faults, which
trap to Xen.  Xen normally kills the guest in this situation, which
is obviously not the desired behavior here.  Instead, Xen must pause
the guest
and inform the host's kernel.  xen_privcmd will have registered a
handler for such events, so it will be informed when this happens.

When xen_privcmd is told that a guest wants to access the revoked
page, it will ask core MM to make the page available.  Once the page
_is_ available, core MM will inform xen_privcmd, which will in turn
provide a page to Xen that will be mapped into the guest's stage 2
translation tables.  This page will generally be different from the
one that was originally lent.
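The whole lend → revoke → p2m fault → repopulate cycle can be sketched
as a small state machine.  This is a toy model in Python; every name
in it (`lend`, `revoke`, `handle_fault`, the mfn bookkeeping) is
hypothetical, not an existing Xen or Linux interface:

```python
# Toy state machine for the lend -> revoke -> p2m fault -> repopulate
# cycle.  All names are hypothetical; this is not a real Xen interface.

class Page:
    def __init__(self, mfn, data):
        self.mfn, self.data = mfn, data   # machine frame + contents


class Host:
    """Models xen_privcmd plus core MM in the lending domain."""
    def __init__(self):
        self.next_mfn = 0

    def alloc(self, data):
        self.next_mfn += 1
        return Page(self.next_mfn, data)

    def handle_fault(self, gfn):
        # Ask core MM to make the data available again; the backing
        # machine page that comes back is usually not the original one.
        return self.alloc("payload")


class Xen:
    def __init__(self):
        self.p2m = {}        # guest frame -> machine page, None if revoked
        self.paused = False

    def lend(self, gfn, page):
        self.p2m[gfn] = page

    def revoke(self, gfn):
        # Must succeed in bounded time, even against a malicious guest.
        self.p2m[gfn] = None

    def guest_access(self, gfn, host):
        page = self.p2m.get(gfn)
        if page is None:
            # p2m fault: pause the guest and notify the host kernel
            # instead of killing the guest.
            self.paused = True
            page = host.handle_fault(gfn)
            self.p2m[gfn] = page   # generally a *different* machine page
            self.paused = False
        return page.data
```

After a revoke, the guest's next access transparently faults in the
same data backed by a different machine page, which is exactly the
invariant the design relies on.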

Requesting a new page can fail.  This is usually due to rare errors,
such as a GPU being hot-unplugged or an I/O error when faulting pages
in from disk.  In these cases, the old contents of the page are lost.

When this happens, xen_privcmd can do one of two things:

1. It can provide a page that is filled with zeros.
2. It can tell Xen that it is unable to fulfill the request.

Which choice it makes is under userspace control.  If userspace
chooses the second option, Xen injects a fault into the guest.
It is up to the guest to handle the fault correctly.
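The two failure policies can be made concrete with a short sketch
(Python; `ZERO_FILL`, `INJECT_FAULT`, and the function name are
illustrative, not a real API).  The choice between them is the
per-request userspace decision described above:

```python
# Toy sketch of the two failure-handling policies.  Names are
# illustrative; no such constants exist in Xen or Linux today.

ZERO_FILL, INJECT_FAULT = "zero-fill", "inject-fault"
PAGE_SIZE = 4096

class GuestFault(Exception):
    """Models Xen injecting a fault into the borrowing guest."""

def resolve_failed_request(policy):
    # Called when core MM could not bring the page back (e.g. after a
    # GPU hot-unplug or an I/O error): the old contents are lost.
    if policy == ZERO_FILL:
        return bytes(PAGE_SIZE)   # hand the guest a page of zeros
    raise GuestFault("page is gone; the guest must handle the fault")
```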

## Restrictions on lent memory

Lent memory is still considered to belong to the lending domain.
The borrowing domain can only access it via its p2m.  Hypercalls made
by the borrowing domain act as if the borrowed memory was not present.
This includes, but is not limited to:

- Using pointers to borrowed memory in hypercall arguments.
- Granting borrowed memory to other VMs.
- Any other operation that depends on whether a page is accessible
  by a domain.

Furthermore:

- Borrowed memory isn't mapped into the IOMMU of any PCIe devices
  the guest has attached, because IOTLB faults generally are not
  replayable.

- Foreign mapping hypercalls that reference lent memory will fail.
  Otherwise, the domain making the foreign mapping hypercall could
  continue to access the borrowed memory after the lease had been
  revoked.  This holds even if the domain performing the foreign
  mapping is an all-powerful dom0: were it exempt, an emulated device
  could access memory whose lease had been revoked.

This also means that live migration of a domain that has borrowed
memory requires cooperation from the lending domain.  For now, it
will be considered out of scope.  Live migration is typically used
with server workloads, and accelerators for server hardware often
support SR-IOV.
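The "acts as if not present" rule above can be modelled as a single
check applied to hypercall argument reads and onward grants (Python,
illustrative only; the function names and error code are invented):

```python
# Toy model: from the hypervisor's point of view, borrowed memory does
# not belong to the borrowing domain, so hypercalls treat it as absent.

EFAULT = -14  # illustrative error code

class Domain:
    def __init__(self):
        self.own_pages = set()       # pages the domain actually owns
        self.borrowed_pages = set()  # reachable only through the p2m

def copy_from_guest(domain, gfn):
    """Models Xen reading a hypercall argument at guest frame gfn."""
    if gfn in domain.borrowed_pages or gfn not in domain.own_pages:
        return EFAULT                # borrowed or absent: not visible
    return 0

def grant_to_other_vm(domain, gfn):
    """Models granting a page onward: forbidden for borrowed memory."""
    return 0 if gfn in domain.own_pages else EFAULT
```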

## Where will lent memory appear in a guest's address space?

Typically, lent memory will be an emulated PCI BAR.  It may be emulated
by dom0 or an alternate ioreq server.  However, it is not *required*
to be a PCI BAR.

## Privileges required for memory lending

For obvious reasons, the domain lending the memory must be privileged
over the domain borrowing it.  The lending domain does not inherently
need to be privileged over the whole system.  However, supporting
configurations in which the lending domain is not dom0 will require
extensions to Xen's permission model, except in the case where the
lending domain serves only a single VM.

Memory lending hypercalls are not subject to the restrictions of
XSA-77.  They may safely be delegated to VMs other than dom0.

## Userspace API

To the extent possible, the memory lending API should be similar
to KVM's uAPI.  Ideally, userspace should be able to abstract over
the differences.  Using the API should not require root privileges
or be equivalent to root on the host.  It should only require a file
descriptor that only allows controlling a single domain.
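As one hypothetical illustration of what "similar to KVM's uAPI" could
mean, the sketch below mirrors the shape of KVM's
`kvm_userspace_memory_region` struct; the struct name, field layout as
a Xen interface, and the ioctl mentioned in the comment are all
invented for this example:

```python
import ctypes

# Hypothetical sketch of a KVM-like memory-lending uAPI.  The layout
# mirrors KVM's kvm_userspace_memory_region; nothing here is an
# existing Xen interface.

class XenLendMemoryRegion(ctypes.Structure):
    _fields_ = [
        ("slot",            ctypes.c_uint32),
        ("flags",           ctypes.c_uint32),
        ("guest_phys_addr", ctypes.c_uint64),
        ("memory_size",     ctypes.c_uint64),  # bytes
        ("userspace_addr",  ctypes.c_uint64),  # start of mmap()ed range
    ]

# Userspace would issue something like the following against a
# per-domain file descriptor, so holding the fd confers control over
# only that one domain (ioctl name is hypothetical):
#
#   ioctl(domain_fd, XEN_LEND_SET_MEMORY_REGION, byref(region))
```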

## Future directions: Creating & running Xen VMs without special privileges

With the exception of a single page used for hypercalls, it is
possible for a Xen domain to *only* have borrowed memory.  Such a
domain can be managed by an entirely unprivileged userspace process,
just as it would manage a KVM VM.  Since the "host" in this scenario
only needs privilege over a domain it itself created, it is possible
(once a subset of XSA-77 restrictions are lifted) for this domain
to not actually be dom0.

Even with XSA-77, the host domain could still ask dom0 to create and
destroy domains on its behalf.  Qubes OS already allows unprivileged
guests to cause domain creation and destruction, so this does not
introduce any new Xen attack surface.

This could allow unprivileged processes in a domU to create and manage
sub-domUs, just as if the domU had nested virtualization support and
KVM was used.  However, this should provide significantly better
performance than nested virtualization.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)



 

