
Re: [Xen-devel] [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv

On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
> I forgot to include this brief information about this patch series.
> This patch series contains the implementation of a new device driver,
> hyper_dmabuf, which provides a method for DMA-BUF sharing across
> different OSes running on the same physical machine under a hypervisor.
> Detailed information about this driver is described in a high-level doc
> added by the second patch of the series.
> [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
> I am attaching 'Overview' section here as a summary.
> ------------------------------------------------------------------------------
> Section 1. Overview
> ------------------------------------------------------------------------------
> The Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> Machines (VMs), which extends DMA-BUF sharing capability to the VM environment,
> where multiple different OS instances need to share the same physical data
> without copying it across VMs.
> To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF driver on the
> exporting VM (the so-called “exporter”) imports a local DMA_BUF from the
> original producer of the buffer, then re-exports it to the importing VM (the
> so-called “importer”) under a unique ID, the hyper_dmabuf_id.
> Another instance of the Hyper_DMABUF driver on the importer registers the
> hyper_dmabuf_id, together with reference information for the shared physical
> pages backing the DMA_BUF, in its database when the export happens.
> The actual mapping of the DMA_BUF on the importer’s side is done by
> the Hyper_DMABUF driver when user space issues the IOCTL command to access
> the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> an exporting driver as-is; no special configuration is required.
> Consequently, only a single module per VM is needed to enable cross-VM
> DMA_BUF exchange.
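
To make the flow described above concrete, here is a rough sketch of the
exporter-side userspace call. Every ioctl number, struct layout and field name
below is invented for illustration; the real ABI is whatever the series' uapi
header defines.

/* Hypothetical sketch only -- not the driver's actual uapi. */
#include <stdio.h>
#include <sys/ioctl.h>

struct hyper_dmabuf_export_args {      /* assumed layout */
        int dmabuf_fd;                 /* local DMA-BUF to share */
        int remote_domain;             /* importing VM's domain id */
        unsigned long hyper_dmabuf_id; /* filled in by the driver */
};

#define HYPER_DMABUF_IOCTL_EXPORT \
        _IOWR('h', 0, struct hyper_dmabuf_export_args)  /* made up */

/* exporter VM: hand a local dma-buf to the driver and get the cross-VM
 * id back; the id then travels out-of-band to the importer VM, which
 * issues its own ioctl to map the buffer on first access */
static long export_to_vm(int hyper_fd, int dmabuf_fd, int domid)
{
        struct hyper_dmabuf_export_args args = {
                .dmabuf_fd     = dmabuf_fd,
                .remote_domain = domid,
        };

        if (ioctl(hyper_fd, HYPER_DMABUF_IOCTL_EXPORT, &args) < 0) {
                perror("hyper_dmabuf export");
                return -1;
        }
        return args.hyper_dmabuf_id;
}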

So I know that most dma-buf implementations (especially lots of importers
in drivers/gpu) break this, but fundamentally only the original exporter
is allowed to know about the underlying pages. There are various scenarios
where a dma-buf isn't backed by anything like a struct page.

So your first step of noodling the underlying struct page out of the
dma-buf kinda breaks the abstraction, and I don't think it's a good
idea to have that, especially not for sharing across VMs.
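
For contrast, the intended way for an importer to get at a dma-buf's backing
storage is through the attachment API, which only ever hands back DMA
addresses in an sg_table and leaves it to the exporter to decide what (if
anything) backs them. A minimal sketch against the 4.15-era API (the function
name is mine):

#include <linux/dma-buf.h>
#include <linux/err.h>

/* importer side: bind a device to the dma-buf and get DMA addresses;
 * at no point does the importer see struct pages */
static struct sg_table *import_for_device(struct dma_buf *dmabuf,
                                          struct device *dev)
{
        struct dma_buf_attachment *attach;
        struct sg_table *sgt;

        attach = dma_buf_attach(dmabuf, dev);
        if (IS_ERR(attach))
                return ERR_CAST(attach);

        /* the exporter decides how this sg_table is backed */
        sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt))
                dma_buf_detach(dmabuf, attach);

        return sgt;
}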

I think a better design would be if hyper-dmabuf were the dma-buf
exporter in both of the VMs, and you'd import it wherever you want in
some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
in control of the pages, and a lot of the troublesome forwarding you
currently need to do disappears.
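
Roughly, that inversion would look like this: hyper-dmabuf calls
dma_buf_export() inside each VM and implements the ops itself, so GPU/video
drivers only ever import. The callback names and bodies below are
placeholders; only the dma_buf_export()/dma_buf_ops plumbing is the real 4.15
API (which additionally requires .map_atomic, .map and .mmap callbacks,
omitted here for brevity):

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/fcntl.h>

/* placeholder: would build an sg_table from the pages the hypervisor
 * shared into this VM (e.g. via grant references on Xen) */
static struct sg_table *hyper_map(struct dma_buf_attachment *attach,
                                  enum dma_data_direction dir)
{
        return ERR_PTR(-ENOSYS);
}

static void hyper_unmap(struct dma_buf_attachment *attach,
                        struct sg_table *sgt,
                        enum dma_data_direction dir)
{
}

/* placeholder: drop the grant/share when the last reference goes away */
static void hyper_release(struct dma_buf *buf)
{
}

static const struct dma_buf_ops hyper_dmabuf_ops = {
        .map_dma_buf   = hyper_map,
        .unmap_dma_buf = hyper_unmap,
        .release       = hyper_release,
        /* .map_atomic/.map/.mmap also required in 4.15, omitted here */
};

static struct dma_buf *hyper_dmabuf_export_local(size_t size, void *priv)
{
        DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

        exp_info.ops   = &hyper_dmabuf_ops;
        exp_info.size  = size;
        exp_info.flags = O_RDWR;
        exp_info.priv  = priv;  /* must be non-NULL */

        return dma_buf_export(&exp_info);
}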

2nd thing: this seems very much related to what's happening around gvt,
which allows at least the host (in a kvm-based VM environment) to access
some of the dma-bufs (or, well, framebuffers in general) that the client
is using. Adding some mailing lists for that.

> ------------------------------------------------------------------------------
> There is a git repository at github.com where this series of patches is
> integrated into the Linux kernel tree, based on the commit:
>         commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
>         Author: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
>         Date:   Sun Dec 3 11:01:47 2017 -0500
>             Linux 4.15-rc2
> https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3

Daniel Vetter
Software Engineer, Intel Corporation
