Re: Design session notes: GPU acceleration in Xen
On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
> On 13.06.2024 20:43, Demi Marie Obenour wrote:
> > GPU acceleration requires that pageable host memory be able to be
> > mapped into a guest.
>
> I'm sure it was explained in the session, which sadly I couldn't attend.
> I've been asking Ray and Xenia the same before, but I'm afraid it still
> hasn't become clear to me why this is a _requirement_. After all that's
> against what we're doing elsewhere (i.e. so far it has always been
> guest memory that's mapped in the host). I can appreciate that it might
> be more difficult to implement, but avoiding to violate this fundamental
> (kind of) rule might be worth the price (and would avoid other
> complexities, of which there may be lurking more than what you enumerate
> below).

The GPU driver knows how to allocate buffers that are usable by the GPU.
On a discrete GPU, these buffers will generally be in VRAM, rather than
in system RAM, because access to system RAM requires going through the
PCI bus (slow). However, VRAM is a limited resource, so the driver will
migrate pages between VRAM and system RAM as needed. During the
migration, a guest that tries to access the pages must block until the
migration is complete.

Some GPU drivers support accessing externally provided memory. This is
called "userptr", and is supported by i915 and amdgpu. However, it
appears that some other drivers (such as MSM) do not support it, and
since GPUs with VRAM need to be supported anyway, Xen still needs to
support GPU driver-allocated memory.

I also CCd dri-devel@xxxxxxxxxxxxxxxxxxxxx and the general GPU driver
maintainers in Linux in case they can give a better answer, as well as
Rob Clark, who invented native contexts.

> > This requires changes to all of the Xen hypervisor, Linux
> > kernel, and userspace device model.
> >
> > ### Goals
> >
> > - Allow any userspace pages to be mapped into a guest.
> > - Support deprivileged operation: this API must not be usable for
> >   privilege escalation.
> > - Use MMU notifiers to ensure safety with respect to use-after-free.
> >
> > ### Hypervisor changes
> >
> > There are at least two Xen changes required:
> >
> > 1. Add a new flag to IOREQ that means "retry this instruction".
> >
> >    An IOREQ server can set this flag after having successfully
> >    handled a page fault. It is expected that the IOREQ server has
> >    successfully mapped a page into the guest at the location of the
> >    fault. Otherwise, the same fault will likely happen again.
>
> Were there any thoughts on how to prevent this becoming an infinite loop?
> I.e. how to (a) guarantee forward progress in the guest and (b) deal with
> misbehaving IOREQ servers?

Guaranteeing forward progress is up to the IOREQ server. If the IOREQ
server misbehaves, an infinite loop is possible, but the CPU time used
by it should be charged to the IOREQ server, so this isn't a
vulnerability.

> > 2. Add support for `XEN_DOMCTL_memory_mapping` to use system RAM,
> >    not just IOMEM. Mappings made with `XEN_DOMCTL_memory_mapping`
> >    are guaranteed to be able to be successfully revoked with
> >    `XEN_DOMCTL_memory_mapping`, so all operations that would create
> >    extra references to the mapped memory must be forbidden. These
> >    include, but may not be limited to:
> >
> >    1. Granting the pages to the same or other domains.
> >    2. Mapping into another domain using `XEN_DOMCTL_memory_mapping`.
> >    3. Another domain accessing the pages using the foreign memory
> >       APIs, unless it is privileged over the domain that owns the
> >       pages.
>
> All of which may call for actually converting the memory to kind-of-MMIO,
> with a means to later convert it back.

Would this support the case where the mapping domain is not fully
privileged, and where it might be a PV guest?

--
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab