
Re: [Xen-devel] [RESEND PATCH v4 0/9] mm: Use vm_map_pages() and vm_map_pages_zero() API



Hi Andrew,

On Tue, Mar 19, 2019 at 7:47 AM Souptick Joarder <jrdr.linux@xxxxxxxxx> wrote:
>
> Previously, drivers had their own way of mapping a range of
> kernel pages/memory into a user vma, typically by invoking
> vm_insert_page() within a loop.
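
(For reference, the open-coded pattern being replaced looks roughly like
the sketch below; "buf", its "pages" array and "num_pages" count are
hypothetical driver-private names.)

	/* Old per-driver pattern: insert each page into the vma by hand. */
	unsigned long uaddr = vma->vm_start;
	unsigned long i;
	int ret;

	for (i = 0; i < buf->num_pages; i++) {
		ret = vm_insert_page(vma, uaddr, buf->pages[i]);
		if (ret)
			return ret;
		uaddr += PAGE_SIZE;
	}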
>
> As this pattern is common across different drivers, it can
> be generalized by creating new functions and using them across
> the drivers.
>
> vm_map_pages() is the API which can be used to map
> kernel memory/pages in drivers which have considered vm_pgoff.
>
> vm_map_pages_zero() is the API which can be used to map a
> range of kernel memory/pages in drivers which have not considered
> vm_pgoff. vm_pgoff is passed as 0 by default for those drivers.
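
For context, the helpers added in [1/9] have the signatures below (as in
include/linux/mm.h); the mmap handler is only a minimal usage sketch, and
my_drv_mmap/my_drv_buf are hypothetical names:

	int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
			 unsigned long num);
	int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
			      unsigned long num);

	/* Minimal usage sketch; my_drv_buf and my_drv_mmap are hypothetical. */
	static int my_drv_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct my_drv_buf *buf = file->private_data;

		/* Map buf->pages into the vma, honouring vma->vm_pgoff. */
		return vm_map_pages(vma, buf->pages, buf->num_pages);
	}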
>
> We _could_ then, at a later point, "fix" the drivers which are using
> vm_map_pages_zero() to behave according to the normal vm_pgoff
> offsetting simply by removing the _zero suffix on the function
> name, and if that causes regressions, it gives us an easy way to revert.
>
> Tested on Rockchip hardware and display is working fine, including talking
> to Lima via prime.
>
> v1 -> v2:
>         Collected a few Reviewed-by tags.
>
>         Updated the change log in [8/9]
>
>         In [7/9], vm_pgoff is treated in the V4L2 API as a 'cookie'
>         to select a buffer, not as an in-buffer offset, by design,
>         and it always wants to mmap a whole buffer from its beginning.
>         Added additional changes after discussing with Marek, so that
>         vm_map_pages() can be used instead of vm_map_pages_zero().
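
In other words, the vb2 core consumes vm_pgoff as the buffer-selection
cookie, so it can be reset before the allocator's mmap runs and the
allocator can then map the whole buffer with vm_map_pages(). Roughly
(a sketch of the idea, not the literal patch; field names are assumed):

	/*
	 * In the vb2 core mmap path: vm_pgoff was only a cookie used to
	 * look up the buffer, so clear it before calling the allocator's
	 * mmap, which maps the whole buffer from its beginning.
	 */
	vma->vm_pgoff = 0;

	/* ...and in the dma-sg allocator's mmap the insertion loop becomes: */
	return vm_map_pages(vma, buf->pages, buf->num_pages);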
>
> v2 -> v3:
>         Corrected the documentation as per the review comments.
>
>         As suggested in v2, renaming the interfaces to -
>         *vm_insert_range() -> vm_map_pages()* and
>         *vm_insert_range_buggy() -> vm_map_pages_zero()*.
>         As the interface is renamed, modified the code accordingly,
>         updated the change logs and modified the subject lines to use the
>         new interfaces. There is no other change apart from renaming and
>         using the new interface.
>
>         Patches [1/9] & [4/9] tested on Rockchip hardware.
>
> v3 -> v4:
>         Fixed build warnings on patch [8/9] reported by kbuild test robot.
>
> Souptick Joarder (9):
>   mm: Introduce new vm_map_pages() and vm_map_pages_zero() API
>   arm: mm: dma-mapping: Convert to use vm_map_pages()
>   drivers/firewire/core-iso.c: Convert to use vm_map_pages_zero()
>   drm/rockchip/rockchip_drm_gem.c: Convert to use vm_map_pages()
>   drm/xen/xen_drm_front_gem.c: Convert to use vm_map_pages()
>   iommu/dma-iommu.c: Convert to use vm_map_pages()
>   videobuf2/videobuf2-dma-sg.c: Convert to use vm_map_pages()
>   xen/gntdev.c: Convert to use vm_map_pages()
>   xen/privcmd-buf.c: Convert to use vm_map_pages_zero()

Is it fine to take these patches into the mm tree for regression testing?

>
>  arch/arm/mm/dma-mapping.c                          | 22 ++----
>  drivers/firewire/core-iso.c                        | 15 +---
>  drivers/gpu/drm/rockchip/rockchip_drm_gem.c        | 17 +----
>  drivers/gpu/drm/xen/xen_drm_front_gem.c            | 18 ++---
>  drivers/iommu/dma-iommu.c                          | 12 +---
>  drivers/media/common/videobuf2/videobuf2-core.c    |  7 ++
>  .../media/common/videobuf2/videobuf2-dma-contig.c  |  6 --
>  drivers/media/common/videobuf2/videobuf2-dma-sg.c  | 22 ++----
>  drivers/xen/gntdev.c                               | 11 ++-
>  drivers/xen/privcmd-buf.c                          |  8 +--
>  include/linux/mm.h                                 |  4 ++
>  mm/memory.c                                        | 81 ++++++++++++++++++++++
>  mm/nommu.c                                         | 14 ++++
>  13 files changed, 134 insertions(+), 103 deletions(-)
>
> --
> 1.9.1
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

