
Re: Mapping memory into a domain


  • To: Demi Marie Obenour <demiobenour@xxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Xenia Ragiadakou <Xenia.Ragiadakou@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Alejandro Vallejo <agarciav@xxxxxxx>
  • Date: Fri, 9 May 2025 11:47:36 +0200
  • Cc: Xen developer discussion <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 09 May 2025 09:47:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

>>>>> A Linux driver that needs access to userspace memory
>>>>> pages can get it in two different ways:
>>>>>
>>>>> 1. It can pin the pages using the pin_user_pages family of APIs.
>>>>>    If these functions succeed, the driver is guaranteed to be able
>>>>>    to access the pages until it unpins them.  However, this also
>>>>>    means that the pages cannot be paged out or migrated.  Furthermore,
>>>>>    file-backed pages cannot be safely pinned, and pinning GPU memory
>>>>>    isn’t supported.  (At a minimum, it would prevent the pages from
>>>>>    migrating from system RAM to VRAM, so all access by a dGPU would
>>>>>    cross the PCIe bus, which would be very slow.)
>>>>
>>>> From a Xen p2m perspective this is all fine - Xen will never remove pages
>>>> from the p2m unless it's requested to.  So the pinning, while needed on
>>>> the Linux side, doesn't need to be propagated to Xen, I would think.

It might still be helpful to have a concept of pinning on the Xen side, to
avoid the pages being evicted for other reasons (ballooning?). I don't think
it'd be sane to allow a page that a domain ever shared with a device to be
returned to Xen.

re: "unless it's requested to". Are there real promises from Xen to that
effect? I could build a hypervisor that oversubscribes memory and swaps
non-IOVA memory in and out to disk, moving it around all the time, and it
would still be compliant with the current behaviour AIUI; but it wouldn't
work with this scheme, because the MFNs would be stale more often than not.
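
(For my own notes, the Linux-side pinning described in option 1 above looks
roughly like the sketch below. It's a minimal illustration only -
pin_user_buffer and its parameters are made-up names, not anything in-tree.)

#include <linux/mm.h>

/* Pin nr_pages of user memory starting at uaddr for long-term use.
 * FOLL_LONGTERM is what makes file-backed pages problematic, as noted
 * above. */
static int pin_user_buffer(unsigned long uaddr, int nr_pages,
                           struct page **pages)
{
    long pinned = pin_user_pages_fast(uaddr, nr_pages,
                                      FOLL_WRITE | FOLL_LONGTERM, pages);

    if (pinned < 0)
        return pinned;
    if (pinned != nr_pages) {
        /* Partial pin: release what we got and bail out. */
        unpin_user_pages(pages, pinned);
        return -EFAULT;
    }
    return 0;
}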

>>>
>>> If pinning were enough things would be simple, but sadly it’s not.
>>>
>>>>> 2. It can grab the *current* location of the pages and register an
>>>>>    MMU notifier.  This works for GPU memory and file-backed memory.
>>>>>    However, when the invalidate_range function of this notifier is
>>>>>    called, the driver *must* stop all further accesses to the pages.
>>>>>
>>>>>    The invalidate_range callback is not allowed to block for a long
>>>>>    period of time.  My understanding is that things like dirty page
>>>>>    writeback are blocked while the callback is in progress.  My
>>>>>    understanding is also that the callback is not allowed to fail.
>>>>>    I believe it can return a retryable error but I don’t think that
>>>>>    it is allowed to keep failing forever.
>>>>>
>>>>>    Linux’s grant table driver actually had a bug in this area, which
>>>>>    led to deadlocks.  I fixed that a while back.
>>>>>
>>>>> KVM implements the second option: it maps pages into the stage-2
>>>>> page tables (or shadow page tables, if that is chosen) and unmaps
>>>>> them when the invalidate_range callback is called.
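
(For concreteness, my mental model of the notifier mechanism described above
is roughly the sketch below. It uses the current mmu_notifier API with
invalidate_range_start; the my_* names and the unmap step are made up for
illustration, they're not taken from KVM or gntdev.)

#include <linux/mmu_notifier.h>

struct my_ctx {
    struct mmu_notifier notifier;   /* embedded subscription       */
    unsigned long start, end;       /* user VA range backing domU  */
};

/* Called by the core MM before a mapping in [range->start, range->end)
 * changes; after returning 0 we must no longer touch those pages. */
static int my_invalidate_range_start(struct mmu_notifier *mn,
                                     const struct mmu_notifier_range *range)
{
    struct my_ctx *ctx = container_of(mn, struct my_ctx, notifier);

    if (range->end <= ctx->start || range->start >= ctx->end)
        return 0;               /* doesn't overlap our range */

    if (!mmu_notifier_range_blockable(range))
        return -EAGAIN;         /* the "retryable error" mentioned above */

    /* Here is where the mfns would have to be torn out of the guest
     * p2m (and the guest somehow kept from touching them) before we
     * return. */
    return 0;
}

static const struct mmu_notifier_ops my_ops = {
    .invalidate_range_start = my_invalidate_range_start,
};

static int my_register(struct my_ctx *ctx, struct mm_struct *mm)
{
    ctx->notifier.ops = &my_ops;
    return mmu_notifier_register(&ctx->notifier, mm);
}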

I'm still lost as to what is where, who initiates what, and what the end
goal is. Is this about using userspace memory in dom0, and THEN sharing
that with guests for as long as it's live? And adding enough magic so the
guests don't notice the transitional period in which there may not be
any memory?

Or is this about using domU memory for the driver living in dom0?

Or is this about something else entirely?

For my own education: is the following sequence diagram remotely accurate?

dom0                              domU
 |                                  |
 |---+                              |
 |   | use gfn3 in the driver       |
 |   | (mapped on user thread)      |
 |<--+                              |
 |                                  |
 |  map mfn(gfn3) in domU BAR       |
 |--------------------------------->|
 |                              +---|
 |              happily use BAR |   |
 |                              +-->|
 |---+                              |
 |   | mmu notifier for gfn3        |
 |   | (invalidate_range)           |
 |<--+                              |
 |                                  |
 |  unmap mfn(gfn3)                 |
 |--------------------------------->| <--- Plus some means of making guest 
 |---+                          +---|      vCPUs pause on access.
 |   | reclaim gfn3    block on |   |
 |<--+                 access   |   |
 |                              |   |
 |---+                          |   |
 |   | use gfn7 in the driver   |   |
 |   | (mapped on user thread)  |   |
 |<--+                          |   |
 |                              |   |
 |  map mfn(gfn7) in domU BAR   |   |
 |------------------------------+-->| <--- Unpause blocked domU vCPUs
 |                                  |

>>> - The switch from “emulated MMIO” to “MMIO or real RAM” needs to
>>>   be atomic from the guest’s perspective.
>> 
>> Updates of p2m PTEs are always atomic.
> That’s good.

Updates to a single PTE are atomic, sure. But mapping/unmapping a range
that isn't a single whole superpage (e.g. 256 KiB: more than a page, less
than a superpage) wouldn't be atomic as far as the guest is concerned.
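
To spell it out (a toy illustration, not Xen's actual p2m code): with 4 KiB
pages a 256 KiB range is 64 leaf PTEs, and while each store below is atomic
on its own, another vCPU can walk the tables between any two of them.

#include <stddef.h>
#include <stdint.h>

/* One PTE update: a single atomic 64-bit store. */
static void set_pte(volatile uint64_t *pte, uint64_t val)
{
    __atomic_store_n(pte, val, __ATOMIC_RELEASE);
}

/* A 256 KiB update: 64 such stores; the range as a whole is not
 * updated atomically from the guest's point of view. */
static void map_range(volatile uint64_t *ptes, const uint64_t *vals, size_t n)
{
    for (size_t i = 0; i < n; i++)      /* n == 64 for 256 KiB */
        set_pte(&ptes[i], vals[i]);
}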

But if my understanding above is correct, maybe it doesn't matter? It
only needs to be atomic wrt the hypercall that requests it, so that the
gfn is never reused while the guest p2m still holds that mfn.

Cheers,
Alejandro



 

