
Re: [Xen-devel] [PATCH v2] drm/xen-front: Make shmem backed display buffer coherent



Hello, Julien!
Sorry for the late reply - it took quite some time to collect the data 
requested.

On 1/22/19 1:44 PM, Julien Grall wrote:
>
>
> On 1/22/19 10:28 AM, Oleksandr Andrushchenko wrote:
>> Hello, Julien!
>
> Hi,
>
>> On 1/21/19 7:09 PM, Julien Grall wrote:
>> Well, I didn't get the attributes of the pages at the backend side, but
>> IMO those do not matter in my use-case (for simplicity I am not using
>> zero-copying at the backend side):
>
> They are actually important no matter what your use case is. If you
> access the same physical page with different attributes, then you are
> asking for trouble.
So, we have:

DomU: frontend side
====================
!PTE_RDONLY + PTE_PXN + PTE_SHARED + PTE_AF + PTE_UXN + 
PTE_ATTRINDX(MT_NORMAL)

DomD: backend side
====================
PTE_USER + !PTE_RDONLY + PTE_PXN + PTE_NG + PTE_CONT + PTE_TABLE_BIT + 
PTE_UXN + PTE_ATTRINDX(MT_NORMAL)

From the above it seems that I don't violate the cached/non-cached
agreement here.
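
In case it helps, here is a minimal sketch (plain user-space C, not kernel
code) of how the attribute index of the two raw PTE values above can be
decoded and compared. It assumes the usual arm64 layout where AttrIndx
occupies bits [4:2] and the kernel's MAIR uses index 4 for MT_NORMAL and
index 3 for MT_NORMAL_NC:

#include <stdint.h>
#include <stdio.h>

#define PTE_ATTRINDX_SHIFT 2
#define PTE_ATTRINDX_MASK  (UINT64_C(7) << PTE_ATTRINDX_SHIFT)

/* MAIR indices as I believe the arm64 kernel sets them up (assumption) */
#define MT_NORMAL_NC 3
#define MT_NORMAL    4

/* Extract the AttrIndx field from a raw PTE value */
static unsigned int pte_attrindx(uint64_t pte)
{
        return (unsigned int)((pte & PTE_ATTRINDX_MASK) >> PTE_ATTRINDX_SHIFT);
}

int main(void)
{
        uint64_t domu_pte = 0; /* fill in the raw PTE dumped in DomU */
        uint64_t domd_pte = 0; /* fill in the raw PTE dumped in DomD */

        printf("DomU AttrIndx=%u, DomD AttrIndx=%u -> %s (MT_NORMAL=%u, MT_NORMAL_NC=%u)\n",
               pte_attrindx(domu_pte), pte_attrindx(domd_pte),
               pte_attrindx(domu_pte) == pte_attrindx(domd_pte) ?
               "same memory type" : "mismatch",
               MT_NORMAL, MT_NORMAL_NC);
        return 0;
}
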
>
> This is why Xen requires all shared pages to have memory attributes that
> follow certain rules. Actually, speaking with Mark R., we may want to
> tighten the attributes a bit more.
>
>>
>> 1. Frontend device allocates display buffer pages which come from shmem
>> and have these attributes:
>> !PTE_RDONLY + PTE_PXN + PTE_SHARED + PTE_AF + PTE_UXN +
>> PTE_ATTRINDX(MT_NORMAL)
>
> My knowledge of Xen DRM is non-existent. However, looking at the code in
> 5.0-rc2, I don't seem to find the same attributes. For instance
> xen_drm_front_gem_prime_vmap and gem_mmap_obj are using
> pgprot_writecombine. So it looks like the mapping will be non-cacheable
> on Arm64.
>
> Can you explain how you came up with these attributes?
pgprot_writecombine is PTE_ATTRINDX(MT_NORMAL_NC), so it seems to be 
applicable here? [1]
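
For reference, if I read [1] correctly, the arm64 definition is:

#define pgprot_writecombine(prot) \
        __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN)

i.e. it replaces the attribute index with MT_NORMAL_NC and sets PXN/UXN,
so such a mapping ends up Normal Non-Cacheable.
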
>
>>
>> 2. Frontend grants references to these pages and shares those with the
>> backend
>>
>> 3. Backend is a user-space application (Weston client), so it uses the
>> gntdev kernel driver to mmap the pages. The pages used by gntdev are
>> those coming from the Xen balloon driver, and I believe they are all
>> normal memory and shouldn't be non-cached.
>>
>> 4. Once the frontend starts displaying, it flips the buffers and the
>> backend does a *memcpy* from the frontend-backend shared buffer into
>> Weston's buffer. This means no HW at the backend side touches the
>> shared buffer.
>>
>> 5. I can see a distorted picture.
>>
>> Previously I used a setup with zero-copying, so the picture becomes more
>> complicated in terms of buffers and how those are used by the backend,
>> but anyway it seems that the very basic scenario with memory copying
>> doesn't work for me.
>>
>> Using the DMA API on the frontend's side does help - no artifacts are
>> seen. This is why I'm thinking that this is related to the
>> frontend/kernel side rather than to the backend side, that it is related
>> to caches, and why I'm trying to figure out what can be done here
>> instead of using the DMA API.
>
> We have actually never required cache flushes in other PV protocols, so
> I still don't understand why PV DRM should be different here.
Well, you are right. But at the same time not flushing the buffer causes
problems, so this is why I am trying to figure out what is wrong here.
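
Just to make it concrete, this is roughly what I mean by "using the DMA
API" on the frontend's side - a sketch only, assuming the sg table has
already been built from the shmem pages of the GEM object (e.g. with
drm_prime_pages_to_sg()); the helper name is made up:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Map the shmem-backed display buffer with the streaming DMA API so the
 * architecture code performs the required cache maintenance for us. */
static int dbuf_make_coherent(struct device *dev, struct sg_table *sgt)
{
        if (!dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL))
                return -ENOMEM;

        /* After the CPU has finished writing a frame into the buffer: */
        dma_sync_sg_for_device(dev, sgt->sgl, sgt->orig_nents,
                               DMA_BIDIRECTIONAL);
        return 0;
}
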
>
> To me, it looks like you are either missing some barriers
Barriers for the buffer? I am not sure what you mean here. What is more,
we have a use case where the buffer is not touched by the CPU in DomD and
is solely owned by the HW.
> or the memory attributes are not correct.
Please see above - I do need your advice here.
>
> Cheers,
>
Thank you very much for your time,
Oleksandr

[1] 
https://elixir.bootlin.com/linux/v5.0-rc2/source/arch/arm64/include/asm/pgtable.h#L414