
Re: [Xen-devel] [PATCH] drm/xen-front: Make shmem backed display buffer coherent


> > Sure this actually helps?  It's below 4G in guest physical address
> > space, so it can be backed by pages which are actually above 4G in host
> > physical address space ...
> Yes, you are right here. This is why I wrote about the IOMMU
> and other conditions. E.g. you can have a device which only
> expects 32-bit, but thanks to IOMMU it can access pages above
> 4GiB seamlessly. So, this is why I *hope* that this code *may* help
> such devices. Do you think I don't need it and should remove it?

I would try without that, and maybe add a runtime option (module
parameter) later if it turns out some hardware actually needs that.
Devices which can do 32bit DMA only become less and less common these
days.
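A runtime switch along those lines could look roughly like this (a sketch
only; the parameter name `dma32_alloc` and the helper are hypothetical,
not part of the patch):

```c
/* Sketch: a module parameter gating the below-4G allocation path at
 * runtime.  The names here are hypothetical, not from the patch. */
static bool dma32_alloc;
module_param(dma32_alloc, bool, 0444);
MODULE_PARM_DESC(dma32_alloc,
		 "Allocate display buffers below 4G (for 32-bit DMA devices)");

static gfp_t dbuf_gfp_flags(void)
{
	/* Only restrict the allocation when the user asked for it. */
	return dma32_alloc ? (GFP_USER | __GFP_DMA32) : GFP_USER;
}
```

That keeps the common case unrestricted while still offering an escape
hatch for hardware that genuinely needs below-4G buffers.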

> > > > > +    if (!dma_map_sg(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents,
> > > > > +            DMA_BIDIRECTIONAL)) {
> > > > 
> > > > Are you using the DMA streaming API as a way to flush the caches?
> > > Yes
> > > > Does this mean that GFP_USER isn't making the buffer coherent?
> > > No, it didn't help. I had a question [1] about whether there is a better
> > > way to achieve the same, but haven't had any response yet. So, I implemented
> > > it via the DMA API, which helped.
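For reference, the approach described above, mapping the sg table purely
for its cache-maintenance side effect, looks roughly like this (a sketch
based on the quoted hunk; the function name is hypothetical and error
handling is simplified):

```c
/* Sketch: use the streaming DMA API so that dma_map_sg() performs the
 * cache maintenance for the shmem-backed buffer. */
static int dbuf_flush_via_dma_api(struct drm_device *dev,
				  struct xen_gem_object *xen_obj)
{
	if (!dma_map_sg(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents,
			DMA_BIDIRECTIONAL))
		return -EFAULT;

	/* The mapping itself is never used for DMA; unmap right away and
	 * keep only the cache-flush side effect. */
	dma_unmap_sg(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents,
		     DMA_BIDIRECTIONAL);
	return 0;
}
```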
> > set_pages_array_*() ?
> > 
> > See arch/x86/include/asm/set_memory.h
> Well, x86... I am on arm which doesn't define that...

Oh, arm.  Maybe ask on an arm list then.  I know on arm you have to care
about caching a lot more, but that also is where my knowledge ends ...

Using dma_map_sg for cache flushing looks like a sledgehammer approach
to me.  But maybe it is needed to make xen flush the caches (xen guests
have their own dma mapping implementation, right?  Or is this different
on arm than on x86?).
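If a mapping is kept around anyway, the explicit sync helpers may express
the cache-maintenance intent more directly than repeated map/unmap cycles
(a sketch only; whether this behaves as hoped under the Xen dma mapping
layer on arm is exactly the open question above):

```c
/* Sketch: with the sg table already mapped, explicit sync calls do the
 * cache maintenance without remapping on every flush. */
dma_sync_sg_for_device(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents,
		       DMA_BIDIRECTIONAL);

/* Before touching the buffer from the CPU side again: */
dma_sync_sg_for_cpu(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents,
		    DMA_BIDIRECTIONAL);
```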


Xen-devel mailing list