
Re: Why memory lending is needed for GPU acceleration



Le 29/03/2026 à 19:32, Demi Marie Obenour a écrit :
> On 3/24/26 10:17, Demi Marie Obenour wrote:
>> Here is a proposed design document for supporting mapping GPU VRAM
>> and/or file-backed memory into other domains.  It's not in the form of
>> a patch because the leading + characters would just make it harder to
>> read for no particular gain, and because this is still RFC right now.
>> Once it is ready to merge, I'll send a proper patch.  Nevertheless,
>> you can consider this to be
>>
>> Signed-off-by: Demi Marie Obenour <demiobenour@xxxxxxxxx>
>>
>> This approach is very different from the "frontend-allocates"
>> approach used elsewhere in Xen.  It is very much Linux-centric,
>> rather than Xen-centric.  In fact, MMU notifiers were invented for
>> KVM, and this approach is exactly the same as the one KVM implements.
>> However, to the best of my understanding, the design described here is
>> the only viable one.  Linux MM and GPU drivers require it, and changes
>> to either to relax this requirement will not be accepted upstream.
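For readers unfamiliar with the pattern: the KVM-style MMU-notifier flow
referred to above looks roughly like this. This is an untested,
pseudocode-level sketch; the gpu_lend_* names are hypothetical and the
exact callback signature varies between kernel versions.

```c
/* Pseudocode-level sketch of the KVM-style MMU-notifier pattern.
 * The gpu_lend_* names are hypothetical; the callback shape is
 * approximately that of struct mmu_notifier_ops in recent kernels. */
static int gpu_lend_invalidate_range_start(struct mmu_notifier *mn,
		const struct mmu_notifier_range *range)
{
	/* Linux is about to unmap or migrate [range->start, range->end).
	 * Tear down any foreign (guest) mappings of those pages before
	 * returning, so the guest can never see a stale page. */
	gpu_lend_revoke_foreign_mappings(mn, range->start, range->end);
	return 0;
}

static const struct mmu_notifier_ops gpu_lend_mn_ops = {
	.invalidate_range_start = gpu_lend_invalidate_range_start,
};

/* Registration would attach one notifier to the mm that owns the
 * VMA being lent (cf. mmu_notifier_register()), exactly as KVM does
 * for guest memory. */
```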
>
> Teddy Astie (CCd) proposed a couple of alternatives on Matrix:
>
> 1. Create dma-bufs for guest pages and import them into the host.
>
>     This is a win not only for Xen, but also for KVM.  Right now, shared
>     (CPU) memory buffers must be copied from the guest to the host,
>     which is pointless.  So fixing that is a good thing!  That said,
>     I'm still concerned about triggering GPU driver code-paths that
>     are not tested on bare metal.
>
> 2. Use PASID and 2-stage translation so that the GPU can operate in
>     guest physical memory.
>
>     This is also a win.  AMD XDNA absolutely requires PASID support,
>     and apparently AMD GPUs can also use PASID.  So being able to use
>     PASID is certainly helpful.
>
> However, I don't think either approach is sufficient for two reasons.
>
> First, discrete GPUs have dedicated VRAM, which Xen knows nothing about.
> Only dom0's GPU drivers can manage VRAM, and they will insist on being
> able to migrate it between the CPU and the GPU.  Furthermore, VRAM
> can only be allocated using GPU driver ioctls, which will allocate
> it from dom0-owned memory.
>
> Second, certain Wayland protocols, such as screen capture, require programs
> to be able to import dmabufs.  Both of the above solutions would
> require that the pages be pinned.  I don't think this is an option,
> as IIUC pin_user_pages() fails on mappings of these dmabufs.  It's why
> direct I/O to dmabufs doesn't work.
>

I suppose it fails because of the RAM/VRAM migration constraint you
mentioned earlier. If the location of the memory stays the same (i.e. a
plain guest memory mapping), pinning should be almost a no-op.

(That said, dma-buf buffers from GPU drivers failing to pin is probably
not a good thing in terms of stability; things like cameras probably
break as a result. But I'm not an expert on that subject.)
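My understanding is that the failure comes from the GUP path itself:
dma-buf mmap() implementations generally install VM_IO | VM_PFNMAP
mappings, and pin_user_pages() rejects those outright. Roughly (an
untested paraphrase of the check in mm/gup.c; the exact code differs
between kernel versions):

```c
/* Approximate paraphrase of mm/gup.c:check_vma_flags().  Because
 * dma-buf mmap() mappings usually carry VM_IO | VM_PFNMAP, any
 * pin_user_pages() call on them fails here with -EFAULT, regardless
 * of whether the backing memory could in principle stay put. */
if (vm_flags & (VM_IO | VM_PFNMAP))
	return -EFAULT;
```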

> To the best of my knowledge, these problems mean that lending memory
> is the only way to get robust GPU acceleration for both graphics and
> compute workloads under Xen.  Simpler approaches might work for pure
> compute workloads, for iGPUs, or for drivers that have Xen-specific
> changes.  None of them, however, support graphics workloads on dGPUs
> while using the GPU driver the same way bare metal workloads do.
>
> Linux's graphics stack is massive, and trying to adapt it to work with
> Xen isn't going to be sustainable in the long term.  Adapting Xen to
> fit the graphics stack is probably more work up front, but it has the
> advantage of working with all GPU drivers, including ones that have not
> been written yet.  It also means that the testing done on bare metal is
> still applicable, and that bugs found when using this driver can either
> be reproduced on bare metal or can be fixed without driver changes.

One of my main concerns was whether dma-bufs can be used as "general
purpose" GPU buffers; what I read in driver code suggests it should be
fine, but it's a bit on the edge.

>
> Finally, I'm not actually attached to memory lending at all.  It's a
> lot of complexity, and it's not at all similar to how the rest of
> Xen works.  If someone else can come up with a better solution that
> doesn't require GPU driver changes, I'd be all for it.  Unfortunately,
> I suspect none exists.  One can make almost anything work if one is
> willing to patch the drivers, but I am virtually certain that this
> will not be long-term sustainable.
>

There's also the virtio-gpu side to consider. The blob resource
mechanism appears to assume that GPU memory comes from the host, since
it allows buffers that aren't bound to the virtio-gpu BAR yet (which
also complicates the KVM situation).

You can have GPU memory that exists in virtio-gpu without being
guest-visible; the guest can then map it into its own BAR.
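For reference, the blob flow I mean is roughly the following. This is a
sketch only; the command and constant names are from
include/uapi/linux/virtio_gpu.h, and the virtqueue plumbing and error
handling are elided.

```c
/* Sketch of the virtio-gpu blob flow for host-allocated GPU memory.
 *
 * 1. Guest asks the host to create a host-backed blob resource:
 *    VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB with
 *    blob_mem = VIRTIO_GPU_BLOB_MEM_HOST3D.  At this point the memory
 *    exists host-side only and is not guest-visible.
 *
 * 2. Guest maps it into the shared-memory ("host-visible") BAR at an
 *    offset of its choosing:
 *    VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB { resource_id, offset }.
 *
 * 3. Guest mmap()s that BAR range; accesses now reach host GPU memory.
 */
```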

> If Xen had its own GPU drivers, the situation would be totally
> different.  However, Xen must rely on Linux's GPU drivers, and that
> means it must play by their rules.




--
Teddy Astie | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech





 

