
Re: Mapping memory into a domain


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Alejandro Vallejo <agarciav@xxxxxxx>
  • From: Demi Marie Obenour <demiobenour@xxxxxxxxx>
  • Date: Fri, 9 May 2025 14:21:57 -0400
  • Cc: Xenia Ragiadakou <Xenia.Ragiadakou@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Xen developer discussion <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 09 May 2025 18:21:20 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 5/9/25 6:50 AM, Roger Pau Monné wrote:
> On Fri, May 09, 2025 at 11:47:36AM +0200, Alejandro Vallejo wrote:
>>>>>>> A Linux driver that needs access to userspace memory
>>>>>>> pages can get it in two different ways:
>>>>>>>
>>>>>>> 1. It can pin the pages using the pin_user_pages family of APIs.
>>>>>>>    If these functions succeed, the driver is guaranteed to be able
>>>>>>>    to access the pages until it unpins them.  However, this also
>>>>>>>    means that the pages cannot be paged out or migrated.  Furthermore,
>>>>>>>    file-backed pages cannot be safely pinned, and pinning GPU memory
>>>>>>>    isn’t supported.  (At a minimum, it would prevent the pages from
>>>>>>>    migrating from system RAM to VRAM, so all access by a dGPU would
>>>>>>>    cross the PCIe bus, which would be very slow.)
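
For reference, the pinning path looks roughly like the sketch below.  It is
only an illustration of the pin_user_pages family, not code from a real
driver; the flags, buffer-length handling, and helper name are mine:

    /* Sketch: pin a user buffer for long-term driver access.  Pages stay
     * pinned (and unmovable) until unpin_user_pages() is called. */
    #include <linux/mm.h>

    static int pin_user_buffer(unsigned long uaddr, size_t len,
                               struct page **pages)
    {
        int nr_pages = DIV_ROUND_UP(len, PAGE_SIZE);
        long pinned;

        /* FOLL_LONGTERM marks this as a long-lived pin. */
        pinned = pin_user_pages_fast(uaddr, nr_pages,
                                     FOLL_WRITE | FOLL_LONGTERM, pages);
        if (pinned < 0)
            return pinned;
        if (pinned != nr_pages) {
            unpin_user_pages(pages, pinned);
            return -EFAULT;
        }
        return 0;
    }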
>>>>>>
>>>>>> From a Xen p2m this is all fine - Xen will never remove pages from the
>>>>>> p2m unless it's requested to.  So the pinning, while needed on the Linux
>>>>>> side, doesn't need to be propagated to Xen I would think.
>>
>> It might still be helpful to have the concept of pinning to avoid them
>> being evicted for other reasons (ballooning?). I don't think it'd be
>> sane to allow returning to Xen a page that a domain ever shared with a
>> device.
> 
> If mapped using the p2m_mmio_direct type in the p2m a domain won't be
> able to balloon them out.  It would also be misguided for a guest
> kernel to attempt to balloon out memory that I presume will be inside
> of a PCI device BAR from the guest point of view.

Indeed it will be inside a BAR.

>> re: being requested. Are there real promises from Xen to that effect? I
>> could make a hypervisor oversubscribing on memory that swaps non-IOVA
>> mem in and out to disk, moving it around all the time and it would be
>> compliant with the current behaviour AIUI, but it wouldn't work with
>> this scheme, because the mfn's would be off more often than not.
> 
> Even if Xen supported domain memory swapping, that could never be used
> with domains that have devices attached, as it's not possible to fixup
> the p2m on IOMMU fault and retry the access.
> 
> Not sure you could even move mfns around, as you would need an atomic
> way to copy the previous page contents and set the PTE to point to the
> new page.
> 
> Unless you want to get into a (IMO) complicated scheme where the
> domain notifies the hypervisor which ranges are being used for device
> DMA accesses (and thus requires guest kernel changes), I think
> swapping of guest memory when there are assigned devices is a no-go.
> 
> Xen has (or had? as I've never actually seen it being used) a mechanism
> to swap domain memory to a dom0 file (see tools/xenpaging.c).  However
> more than one provider had mentioned to me that one feature they
> particularly preferred of Xen over KVM is that it would never swap
> guest memory.  Not sure if that's still the case, but some struggled
> to prevent KVM from swapping guest memory, and got complaints of
> slowness from their tenants.
> 
> For the purposes of getting a prototype I would suggest that you
> assume p2m memory cannot be randomly swapped out, unless requested by
> either the guest or the control domain.

The API being discussed here needs to support frontends that have
assigned PCI devices, but the pages should never be mapped into
the frontend domain’s IOMMU context.  If the frontend tries to
DMA into one of these pages it’s a frontend bug.

>>>>> If pinning were enough things would be simple, but sadly it’s not.
>>>>>
>>>>>>> 2. It can grab the *current* location of the pages and register an
>>>>>>>    MMU notifier.  This works for GPU memory and file-backed memory.
>>>>>>>    However, when the invalidate_range callback of this notifier fires, the
>>>>>>>    driver *must* stop all further accesses to the pages.
>>>>>>>
>>>>>>>    The invalidate_range callback is not allowed to block for a long
>>>>>>>    period of time.  My understanding is that things like dirty page
>>>>>>>    writeback are blocked while the callback is in progress.  My
>>>>>>>    understanding is also that the callback is not allowed to fail.
>>>>>>>    I believe it can return a retryable error but I don’t think that
>>>>>>>    it is allowed to keep failing forever.
>>>>>>>
>>>>>>>    Linux’s grant table driver actually had a bug in this area, which
>>>>>>>    led to deadlocks.  I fixed that a while back.
>>>>>>>
>>>>>>> KVM implements the second option: it maps pages into the stage-2
>>>>>>> page tables (or shadow page tables, if that is chosen) and unmaps
>>>>>>> them when the invalidate_range callback is called.
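
For completeness, a rough sketch of the notifier side is below.  Callback
names differ across kernel versions, and struct backend plus
backend_unmap_range() are hypothetical stand-ins for the driver's own
stage-2 bookkeeping:

    #include <linux/mmu_notifier.h>

    /* Hypothetical per-backend state; only the notifier member matters. */
    struct backend {
        struct mmu_notifier mn;
        /* ... bookkeeping for the stage-2 / p2m mappings ... */
    };

    /* Placeholder: tear down stage-2 mappings covering [start, end). */
    static void backend_unmap_range(struct backend *be,
                                    unsigned long start, unsigned long end);

    static int backend_invalidate_start(struct mmu_notifier *mn,
                                        const struct mmu_notifier_range *range)
    {
        struct backend *be = container_of(mn, struct backend, mn);

        /* Must not block for long and must not fail permanently. */
        backend_unmap_range(be, range->start, range->end);
        return 0;
    }

    static const struct mmu_notifier_ops backend_mn_ops = {
        .invalidate_range_start = backend_invalidate_start,
    };

    static int backend_register(struct backend *be, struct mm_struct *mm)
    {
        be->mn.ops = &backend_mn_ops;
        return mmu_notifier_register(&be->mn, mm);
    }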
>>
>> I'm still lost as to what is where, who initiates what and what the end
>> goal is. Is this about using userspace memory in dom0, and THEN sharing
>> that with guests for as long as it's live? And make enough magic so the
>> guests don't notice the transitionary period in which there may not be
>> any memory?
>>
>> Or is this about using domU memory for the driver living in dom0?
>>
>> Or is this about something else entirely?
>>
>> For my own education. Is the following sequence diagram remotely accurate?
>>
>> dom0                              domU
>>  |                                  |
>>  |---+                              |
>>  |   | use gfn3 in the driver       |
>>  |   | (mapped on user thread)      |
>>  |<--+                              |
>>  |                                  |
>>  |  map mfn(gfn3) in domU BAR       |
>>  |--------------------------------->|
>>  |                              +---|
>>  |              happily use BAR |   |
>>  |                              +-->|
>>  |---+                              |
>>  |   | mmu notifier for gfn3        |
>>  |   | (invalidate_range)           |
>>  |<--+                              |
>>  |                                  |
>>  |  unmap mfn(gfn3)                 |
>>  |--------------------------------->| <--- Plus some means to making guest 
>>  |---+                          +---|      vCPUs pause on access.
>>  |   | reclaim gfn3    block on |   |
>>  |<--+                 access   |   |
>>  |                              |   |
>>  |---+                          |   |
>>  |   | use gfn7 in the driver   |   |
>>  |   | (mapped on user thread)  |   |
>>  |<--+                          |   |
>>  |                              |   |
>>  |  map mfn(gfn7) in domU BAR   |   |
>>  |------------------------------+-->| <--- Unpause blocked domU vCPUs
> 
> The guest vCPU will already pause on access if there's a p2m
> violation, until the ioreq has completed and the vCPU execution can
> resume.  That's in control of the ioreq server that handles the
> request.
> 
> I don't know about the dom0 user-space part, but that's possibly of no
> concern for the implementation side in Xen?

I believe so, yes.

> My understanding of the actions needed from the Xen side is:
> 
>  1. Map either RAM owned by the hardware domain or an MMIO page into
>     a domain p2m.
>  2. Remove entries from a domain p2m.
>  3. Handle p2m violations resulting from guest accesses, using 1. and
>     force a guest access retry (or emulate the access).
> 
> 1. Can possibly be done with XEN_DOMCTL_memory_mapping and
> XENMEM_add_to_physmap_batch, but as I understood it, it's not ideal.
> Demi would like a way to use the same hypercall to map either RAM or
> IOMEM into a domain p2m.

Indeed so, and also the backend domain might be a driver domain instead
of the hardware domain.  It needs to have privilege over the frontend,
but it should not need privilege over the whole system.
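
For what it's worth, the MMIO half of 1. can already be expressed through
libxc today; the sketch below assumes the existing DOMCTL, elides error
handling, and the wrapper name is mine:

    #include <xenctrl.h>

    /* Map nr_frames of host MMIO starting at first_mfn into the frontend's
     * p2m at first_gfn.  Pass add_mapping = 0 to unmap (action 2. above). */
    int map_mmio_into_frontend(xc_interface *xch, uint32_t frontend_domid,
                               unsigned long first_gfn, unsigned long first_mfn,
                               unsigned long nr_frames)
    {
        return xc_domain_memory_mapping(xch, frontend_domid,
                                        first_gfn, first_mfn, nr_frames,
                                        1 /* add_mapping */);
    }

The RAM-backed case is exactly what is missing, which is why a single
hypercall that can take either RAM or IOMEM would help.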

> 2. What hypercall to use depends on how the memory is mapped.
> 
> 3. ioreq servers will already get requests for accesses to unmapped
> regions they have registered for.  If the access is to be retried we
> need to expand the ioreq interface a bit to handle this case.  Adding a
> new ioreq state like STATE_IORESP_RETRY might be enough?  Maybe I'm
> being naive though.

This is where an implementation in a real userspace emulator would
be very useful, to ensure that the API being implemented is actually
usable in practice.
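
To make the retry idea concrete, here is a sketch of how an emulator might
consume such a state.  STATE_IORESP_RETRY is the proposed addition and does
not exist today (its value below is illustrative only); resolve_and_map_gfn()
and emulate_access() are placeholders for the backend's own logic:

    #include <stdbool.h>
    #include <stdint.h>
    #include <xenctrl.h>
    #include <xen/hvm/ioreq.h>

    /* Proposed new response state; the value is illustrative only. */
    #define STATE_IORESP_RETRY 4

    /* Hypothetical backend helpers, not part of any real API. */
    bool resolve_and_map_gfn(uint64_t gfn);
    void emulate_access(ioreq_t *req);

    static void handle_ioreq(ioreq_t *req)
    {
        if (req->type == IOREQ_TYPE_COPY &&
            resolve_and_map_gfn(req->addr >> XC_PAGE_SHIFT)) {
            /* The page is mapped again: ask Xen to retry the guest access
             * instead of emulating it (proposed state, not in Xen today). */
            req->state = STATE_IORESP_RETRY;
        } else {
            emulate_access(req);
            req->state = STATE_IORESP_READY;
        }
    }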

>>>>> - The switch from “emulated MMIO” to “MMIO or real RAM” needs to
>>>>>   be atomic from the guest’s perspective.
>>>>
>>>> Updates of p2m PTEs are always atomic.
>>> That’s good.
>>
>> Updates to a single PTE are atomic, sure. But mapping/unmapping sizes
>> not congruent with a whole superpage size (i.e: 256 KiB, more than a
>> page, less than a superpage) wouldn't be, as far as the guest is
>> concerned.
> 
> I've assumed the question was about PTE updates, i.e. whether
> PTE entries were always consistent.
> 
>> But if my understanding above is correct maybe it doesn't matter? It
>> only needs to be atomic wrt the hypercall that requests it, so that the
>> gfn is never reused while the guest p2m still holds that mfn.
> 
> I think it only matters that the PTE is always consistent, either
> mapped or unmapped (and thus generate an ioreq request on access when
> unmapped).
You are correct.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)

Attachment: OpenPGP_0xB288B55FFF9C22C1.asc
Description: OpenPGP public key

Attachment: OpenPGP_signature.asc
Description: OpenPGP digital signature


 

