
Re: Why memory lending is needed for GPU acceleration


  • To: Teddy Astie <teddy.astie@xxxxxxxxxx>, Demi Marie Obenour <demiobenour@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 30 Mar 2026 12:25:25 +0200
  • Cc: Xen developer discussion <xen-devel@xxxxxxxxxxxxxxxxxxxx>, dri-devel@xxxxxxxxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx, Val Packett <val@xxxxxxxxxxxxxxxxxxxxxx>, Ariadne Conill <ariadne@ariadne.space>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Mon, 30 Mar 2026 10:25:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 30.03.2026 12:15, Teddy Astie wrote:
> Le 29/03/2026 à 19:32, Demi Marie Obenour a écrit :

May I ask that the two of you please properly separate To: vs Cc:?

Thanks, Jan

>> On 3/24/26 10:17, Demi Marie Obenour wrote:
>>> Here is a proposed design document for supporting mapping GPU VRAM
>>> and/or file-backed memory into other domains.  It's not in the form of
>>> a patch because the leading + characters would just make it harder to
>>> read for no particular gain, and because this is still RFC right now.
>>> Once it is ready to merge, I'll send a proper patch.  Nevertheless,
>>> you can consider this to be
>>>
>>> Signed-off-by: Demi Marie Obenour <demiobenour@xxxxxxxxx>
>>>
>>> This approach is very different from the "frontend-allocates"
>>> approach used elsewhere in Xen.  It is very much Linux-centric,
>>> rather than Xen-centric.  In fact, MMU notifiers were invented for
>>> KVM, and this approach is exactly the same as the one KVM implements.
>>> However, to the best of my understanding, the design described here is
>>> the only viable one.  Linux MM and GPU drivers require it, and changes
>>> to either to relax this requirement will not be accepted upstream.
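>>>
>>> (For concreteness: the contract this design relies on, the same one
>>> KVM uses, looks roughly like the sketch below.  The lend_ctx
>>> structure and the revocation helper are hypothetical placeholders,
>>> not existing code.)
>>>
>>>     #include <linux/mmu_notifier.h>
>>>     #include <linux/sched.h>
>>>
>>>     struct lend_ctx {
>>>         struct mmu_notifier mn;
>>>     };
>>>
>>>     /* Linux MM calls this before it unmaps or migrates anything in
>>>      * [range->start, range->end); the lender must revoke the
>>>      * borrowing domain's mappings before returning. */
>>>     static int lend_invalidate_range_start(struct mmu_notifier *mn,
>>>                                            const struct mmu_notifier_range *range)
>>>     {
>>>         /* struct lend_ctx *ctx = container_of(mn, struct lend_ctx, mn);
>>>          * revoke_borrower_mappings(ctx, range->start, range->end); */
>>>         return 0;
>>>     }
>>>
>>>     static const struct mmu_notifier_ops lend_mn_ops = {
>>>         .invalidate_range_start = lend_invalidate_range_start,
>>>     };
>>>
>>>     /* Register against the lending process's address space. */
>>>     static int lend_register(struct lend_ctx *ctx)
>>>     {
>>>         ctx->mn.ops = &lend_mn_ops;
>>>         return mmu_notifier_register(&ctx->mn, current->mm);
>>>     }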
>>
>> Teddy Astie (CCd) proposed a couple of alternatives on Matrix:
>>
>> 1. Create dma-bufs for guest pages and import them into the host
>>     (a rough sketch of the exporter side follows after this list).
>>
>>     This is a win not only for Xen, but also for KVM.  Right now, shared
>>     (CPU) memory buffers must be copied from the guest to the host,
>>     which is pointless.  So fixing that is a good thing!  That said,
>>     I'm still concerned about triggering GPU driver code-paths that
>>     are not tested on bare metal.
>>     
>> 2. Use PASID and 2-stage translation so that the GPU can operate in
>>     guest physical memory.
>>     
>>     This is also a win.  AMD XDNA absolutely requires PASID support,
>>     and apparently AMD GPUs can also use PASID.  So being able to use
>>     PASID is certainly helpful.
>>
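>> A minimal sketch of the guest-side exporter for option 1, using the
>> upstream dma-buf API; the ops bodies and the export_guest_pages()
>> helper are placeholders (a real exporter builds an sg_table over the
>> guest pages in map_dma_buf()):
>>
>>     #include <linux/dma-buf.h>
>>     #include <linux/err.h>
>>     #include <linux/fcntl.h>
>>     #include <linux/scatterlist.h>
>>
>>     static struct sg_table *gp_map(struct dma_buf_attachment *att,
>>                                    enum dma_data_direction dir)
>>     {
>>         return ERR_PTR(-EOPNOTSUPP);   /* placeholder */
>>     }
>>
>>     static void gp_unmap(struct dma_buf_attachment *att,
>>                          struct sg_table *sgt,
>>                          enum dma_data_direction dir)
>>     {
>>     }
>>
>>     static void gp_release(struct dma_buf *buf)
>>     {
>>     }
>>
>>     static const struct dma_buf_ops guest_pages_dmabuf_ops = {
>>         .map_dma_buf   = gp_map,
>>         .unmap_dma_buf = gp_unmap,
>>         .release       = gp_release,
>>     };
>>
>>     /* Wrap guest pages in a dma-buf the host GPU driver can import. */
>>     static struct dma_buf *export_guest_pages(void *pages, size_t size)
>>     {
>>         DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
>>
>>         exp_info.ops   = &guest_pages_dmabuf_ops;
>>         exp_info.size  = size;         /* must be page-aligned */
>>         exp_info.flags = O_RDWR;
>>         exp_info.priv  = pages;
>>
>>         return dma_buf_export(&exp_info);
>>     }
>>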
>> However, I don't think either approach is sufficient for two reasons.
>>
>> First, discrete GPUs have dedicated VRAM, which Xen knows nothing about.
>> Only dom0's GPU drivers can manage VRAM, and they will insist on being
>> able to migrate it between the CPU and the GPU.  Furthermore, VRAM
>> can only be allocated using GPU driver ioctls, which will allocate
>> it from dom0-owned memory.
>>
>> Second, certain Wayland protocols, such as screen capture, require programs
>> to be able to import dmabufs.  Both of the above solutions would
>> require that the pages be pinned.  I don't think this is an option,
>> as IIUC pin_user_pages() fails on mappings of these dmabufs.  It's why
>> direct I/O to dmabufs doesn't work.
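>>
>> Concretely: dma-buf mmap()s are typically VM_PFNMAP/VM_IO, and the
>> get_user_pages() family refuses such VMAs, so a pin attempt like the
>> sketch below fails (returns -EFAULT) instead of pinning anything.
>> The helper name is made up:
>>
>>     #include <linux/mm.h>
>>
>>     /* 'uaddr' points into an mmap()ed dma-buf.  The exporter installs
>>      * the pages with remap_pfn_range() or vmf_insert_pfn(), so the
>>      * VMA is VM_PFNMAP and GUP refuses to pin it. */
>>     static long try_pin_dmabuf_mapping(unsigned long uaddr)
>>     {
>>         struct page *page;
>>
>>         return pin_user_pages_fast(uaddr, 1,
>>                                    FOLL_WRITE | FOLL_LONGTERM, &page);
>>     }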
>>
> 
> I suppose it fails because of the RAM/VRAM constraint you mentioned 
> previously. If the location of the memory stays the same (i.e. a plain 
> guest memory mapping), pinning should be almost a no-op.
> 
> (Though having dma-buf buffers from GPU drivers fail to pin is probably 
> not a good thing in terms of stability; things like cameras probably 
> break as a result. But I'm not an expert on that subject.)
> 
>> To the best of my knowledge, these problems mean that lending memory
>> is the only way to get robust GPU acceleration for both graphics and
>> compute workloads under Xen.  Simpler approaches might work for pure
>> compute workloads, for iGPUs, or for drivers that have Xen-specific
>> changes.  None of them, however, support graphics workloads on dGPUs
>> while using the GPU driver the same way bare metal workloads do.
>>
>> Linux's graphics stack is massive, and trying to adapt it to work with
>> Xen isn't going to be sustainable in the long term.  Adapting Xen to
>> fit the graphics stack is probably more work up front, but it has the
>> advantage of working with all GPU drivers, including ones that have not
>> been written yet.  It also means that the testing done on bare metal is
>> still applicable, and that bugs found when using this driver can either
>> be reproduced on bare metal or can be fixed without driver changes.
> 
> One of my main concerns was whether dma-buf can be used for "general 
> purpose" GPU buffers; what I've read in driver code suggests it should 
> be fine, but it's a bit on the edge.
> 
>>
>> Finally, I'm not actually attached to memory lending at all.  It's a
>> lot of complexity, and it's not at all similar to how the rest of
>> Xen works.  If someone else can come up with a better solution that
>> doesn't require GPU driver changes, I'd be all for it.  Unfortunately,
>> I suspect none exists.  One can make almost anything work if one is
>> willing to patch the drivers, but I am virtually certain that this
>> will not be long-term sustainable.
>>
> 
> There's also the virtio-gpu side to consider. The blob mechanism 
> appears to insist that GPU memory come from the host, by allowing 
> buffers that aren't yet bound to the virtio-gpu BAR (which also 
> complicates the KVM situation).
> 
> You can have GPU memory that exists in virtio-gpu without being 
> guest-visible; the guest can then map it into its own BAR.
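>
> For reference, the host-memory blob flow from the virtio-gpu spec looks
> roughly like this; a minimal sketch, with queue_ctrl_command() standing
> in for the driver's real submission path:
>
>     #include <linux/kernel.h>
>     #include <linux/virtio_gpu.h>
>
>     static void create_and_map_host_blob(void)
>     {
>         /* Host-owned backing (e.g. VRAM), named by blob_id; no guest
>          * pages are attached at creation time. */
>         struct virtio_gpu_resource_create_blob create = {
>             .hdr.type    = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB),
>             .resource_id = cpu_to_le32(1),
>             .blob_mem    = cpu_to_le32(VIRTIO_GPU_BLOB_MEM_HOST3D),
>             .blob_flags  = cpu_to_le32(VIRTIO_GPU_BLOB_FLAG_USE_MAPPABLE),
>             .blob_id     = cpu_to_le64(42),
>             .size        = cpu_to_le64(1UL << 20),
>         };
>         /* Only now does the memory become guest-visible, at a
>          * guest-chosen offset into the shared-memory BAR. */
>         struct virtio_gpu_resource_map_blob map = {
>             .hdr.type    = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB),
>             .resource_id = cpu_to_le32(1),
>             .offset      = cpu_to_le64(0),
>         };
>
>         /* queue_ctrl_command(&create); queue_ctrl_command(&map); */
>     }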
> 
>> If Xen had its own GPU drivers, the situation would be totally
>> different.  However, Xen must rely on Linux's GPU drivers, and that
>> means it must play by their rules.
> 
> --
> Teddy Astie | Vates XCP-ng Developer
> 
> XCP-ng & Xen Orchestra - Vates solutions
> 
> web: https://vates.tech
> 