
Re: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain



Hi,

> -----Original Message-----
> From: hch@xxxxxxxxxxxxx [mailto:hch@xxxxxxxxxxxxx]
> Sent: January 24, 2019 5:14
> To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> Cc: hch@xxxxxxxxxxxxx; Peng Fan <peng.fan@xxxxxxx>; mst@xxxxxxxxxx;
> jasowang@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxxx;
> linux-remoteproc@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx; luto@xxxxxxxxxx; jgross@xxxxxxxx;
> boris.ostrovsky@xxxxxxxxxx
> Subject: Re: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain
> 
> On Wed, Jan 23, 2019 at 01:04:33PM -0800, Stefano Stabellini wrote:
> > If vring_use_dma_api is actually supposed to return true when
> > dma_dev->dma_mem is set, then neither Peng's patch nor the patch I
> > wrote fixes the real issue here.
> >
> > I don't know enough about remoteproc to know where the problem
> > actually lies though.
> 
> The problem is the following:
> 
> Devices can declare a specific memory region that they want to use when the
> driver calls dma_alloc_coherent for the device.  This is done using the
> shared-dma-pool DT attribute, which comes in two variants that would be a
> little too much to explain here.
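> 
> To make that concrete (a rough sketch only; the node name, addresses
> and probe function below are made up), the consumer side looks
> something like this:
> 
> /*
>  * DT side (illustrative):
>  *
>  *     reserved-memory {
>  *         rpmsg_dma: rpmsg@b8000000 {
>  *             compatible = "shared-dma-pool";
>  *             reg = <0xb8000000 0x400000>;
>  *         };
>  *     };
>  *
>  * with the consumer node pointing at it via
>  * memory-region = <&rpmsg_dma>;
>  */
> #include <linux/dma-mapping.h>
> #include <linux/of_reserved_mem.h>
> #include <linux/platform_device.h>
> #include <linux/sizes.h>
> 
> static int example_probe(struct platform_device *pdev)
> {
> 	dma_addr_t dma;
> 	void *va;
> 	int ret;
> 
> 	/* Attach the device to its memory-region pool. */
> 	ret = of_reserved_mem_device_init(&pdev->dev);
> 	if (ret)
> 		return ret;
> 
> 	/* This allocation is now served from the dedicated pool. */
> 	va = dma_alloc_coherent(&pdev->dev, SZ_4K, &dma, GFP_KERNEL);
> 	if (!va)
> 		return -ENOMEM;
> 
> 	return 0;
> }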
> 
> remoteproc makes use of that because apparently the device can only
> communicate using that region.  But it then feeds memory obtained with
> dma_alloc_coherent back into the virtio code.  For that it calls
> vmalloc_to_page on the dma_alloc_coherent allocation, which is a huge
> no-go for the DMA API and only worked accidentally on a few platforms;
> apparently arm64 just changed a few internals that made it stop working
> for remoteproc.
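> 
> Spelled out (an illustrative reconstruction of the pattern, not the
> literal remoteproc code):
> 
> #include <linux/dma-mapping.h>
> #include <linux/vmalloc.h>
> 
> static struct page *broken_get_page(struct device *dev, size_t size)
> {
> 	dma_addr_t dma;
> 	void *va = dma_alloc_coherent(dev, size, &dma, GFP_KERNEL);
> 
> 	if (!va)
> 		return NULL;
> 	/*
> 	 * Broken: the DMA API gives no guarantee that 'va' is a
> 	 * vmalloc address; with a per-device pool it is usually a
> 	 * plain remap, so this only works by accident.
> 	 */
> 	return vmalloc_to_page(va);
> }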
> 
> The right answer is to not use the DMA API to allocate memory from a
> device-specific region, but to tie the driver directly into the DT reserved
> memory API in a way that allows it to easily obtain a struct device for it.
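> 
> One possible shape for that (a sketch under assumptions, not a
> worked-out patch; the helper name and the memory-region lookup are
> illustrative):
> 
> #include <linux/io.h>
> #include <linux/of.h>
> #include <linux/of_reserved_mem.h>
> 
> static void *map_device_pool(struct device_node *np, phys_addr_t *base,
> 			     size_t *size)
> {
> 	struct device_node *mem_np;
> 	struct reserved_mem *rmem;
> 
> 	mem_np = of_parse_phandle(np, "memory-region", 0);
> 	if (!mem_np)
> 		return NULL;
> 
> 	rmem = of_reserved_mem_lookup(mem_np);
> 	of_node_put(mem_np);
> 	if (!rmem)
> 		return NULL;
> 
> 	*base = rmem->base;
> 	*size = rmem->size;
> 	/* Driver-owned mapping; allocation policy stays in the driver. */
> 	return memremap(rmem->base, rmem->size, MEMREMAP_WC);
> }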

I just have a question.

Since vmalloc_to_page is fine for a CMA area, there is no need to take
the default CMA area or per-device CMA areas into consideration, right?

We only need to implement a piece of code to handle the per-device
specific region using RESERVEDMEM_OF_DECLARE, e.g.:

RESERVEDMEM_OF_DECLARE(rpmsg_dma, "rpmsg-dma-pool", rmem_rpmsg_dma_setup);

Then implement the device_init callback and build a mapping between
struct page and the physical address.  After that, the scatterlist in
the rpmsg driver can use struct page directly, with no need for
vmalloc_to_page on the per-device DMA memory.
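
Roughly like this (just a sketch of what I mean; the bookkeeping in
device_init is only described in a comment):

#include <linux/device.h>
#include <linux/of_reserved_mem.h>

static int rmem_rpmsg_dma_device_init(struct reserved_mem *rmem,
				      struct device *dev)
{
	/*
	 * Record rmem->base and rmem->size somewhere the rpmsg driver
	 * can reach, so it can translate phys <-> struct page directly
	 * instead of calling vmalloc_to_page.
	 */
	return 0;
}

static const struct reserved_mem_ops rmem_rpmsg_dma_ops = {
	.device_init = rmem_rpmsg_dma_device_init,
};

static int __init rmem_rpmsg_dma_setup(struct reserved_mem *rmem)
{
	rmem->ops = &rmem_rpmsg_dma_ops;
	return 0;
}

RESERVEDMEM_OF_DECLARE(rpmsg_dma, "rpmsg-dma-pool", rmem_rpmsg_dma_setup);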

Is this the right way?

Thanks,
Peng.

> 
> This is orthogonal to another issue: hardware virtio devices really
> always need to use the DMA API, otherwise we'll bypass features such
> as device-specific DMA pools, DMA offsets, cache flushing, etc.