Re: [Xen-devel] [PATCH v2 1/9] mm: Introduce new vm_insert_range API
On Sun, Dec 02, 2018 at 11:49:44AM +0530, Souptick Joarder wrote:
> Previously drivers have their own way of mapping range of
> kernel pages/memory into user vma and this was done by
> invoking vm_insert_page() within a loop.
>
> As this pattern is common across different drivers, it can
> be generalized by creating a new function and using it across
> the drivers.
>
> vm_insert_range is the new API which will be used to map a
> range of kernel memory/pages to user vma.
>
> This API is tested by Heiko for Rockchip drm driver, on rk3188,
> rk3288, rk3328 and rk3399 with graphics.
>
> Signed-off-by: Souptick Joarder <jrdr.linux@xxxxxxxxx>
> Reviewed-by: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> Tested-by: Heiko Stuebner <heiko@xxxxxxxxx>
> ---
>  include/linux/mm_types.h |  3 +++
>  mm/memory.c              | 38 ++++++++++++++++++++++++++++++++++++++
>  mm/nommu.c               |  7 +++++++
>  3 files changed, 48 insertions(+)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 5ed8f62..15ae24f 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -523,6 +523,9 @@ extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
>  extern void tlb_finish_mmu(struct mmu_gather *tlb,
>  				unsigned long start, unsigned long end);
>
> +int vm_insert_range(struct vm_area_struct *vma, unsigned long addr,
> +			struct page **pages, unsigned long page_count);
> +

This seems to belong in include/linux/mm.h, near vm_insert_page().

>  static inline void init_tlb_flush_pending(struct mm_struct *mm)
>  {
>  	atomic_set(&mm->tlb_flush_pending, 0);
> diff --git a/mm/memory.c b/mm/memory.c
> index 15c417e..84ea46c 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1478,6 +1478,44 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
>  }
>
>  /**
> + * vm_insert_range - insert range of kernel pages into user vma
> + * @vma: user vma to map to
> + * @addr: target user address of this page
> + * @pages: pointer to array of source kernel pages
> + * @page_count: number of pages need to insert into user vma
> + *
> + * This allows drivers to insert range of kernel pages they've allocated
> + * into a user vma. This is a generic function which drivers can use
> + * rather than using their own way of mapping range of kernel pages into
> + * user vma.
> + *
> + * If we fail to insert any page into the vma, the function will return
> + * immediately leaving any previously-inserted pages present. Callers
> + * from the mmap handler may immediately return the error as their caller
> + * will destroy the vma, removing any successfully-inserted pages. Other
> + * callers should make their own arrangements for calling unmap_region().
> + *
> + * Context: Process context. Called by mmap handlers.
> + * Return: 0 on success and error code otherwise
> + */
> +int vm_insert_range(struct vm_area_struct *vma, unsigned long addr,
> +			struct page **pages, unsigned long page_count)
> +{
> +	unsigned long uaddr = addr;
> +	int ret = 0, i;
> +
> +	for (i = 0; i < page_count; i++) {
> +		ret = vm_insert_page(vma, uaddr, pages[i]);
> +		if (ret < 0)
> +			return ret;
> +		uaddr += PAGE_SIZE;
> +	}
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(vm_insert_range);
> +
> +/**
>   * vm_insert_page - insert single page into user vma
>   * @vma: user vma to map to
>   * @addr: target user address of this page
> diff --git a/mm/nommu.c b/mm/nommu.c
> index 749276b..d6ef5c7 100644
> --- a/mm/nommu.c
> +++ b/mm/nommu.c
> @@ -473,6 +473,13 @@ int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
>  }
>  EXPORT_SYMBOL(vm_insert_page);
>
> +int vm_insert_range(struct vm_area_struct *vma, unsigned long addr,
> +			struct page **pages, unsigned long page_count)
> +{
> +	return -EINVAL;
> +}
> +EXPORT_SYMBOL(vm_insert_range);
> +
>  /*
>   * sys_brk() for the most part doesn't need the global kernel
>   * lock, except when an application is doing something nasty
> --
> 1.9.1
>

--
Sincerely yours,
Mike.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
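For readers who want to see the conversion the cover letter describes, here is a
rough, hypothetical sketch of a driver mmap handler moving from the open-coded
vm_insert_page() loop to a single vm_insert_range() call. The struct and
function names (my_buf, my_drv_mmap) are made up for illustration and are not
part of this series.

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical driver state: an array of already-allocated pages. */
struct my_buf {
	struct page **pages;		/* pages backing the buffer */
	unsigned long page_count;	/* number of entries in pages[] */
};

static int my_drv_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct my_buf *buf = file->private_data;

	/* Refuse mappings larger than the buffer we actually have. */
	if (vma_pages(vma) > buf->page_count)
		return -ENXIO;

	/*
	 * Old pattern: loop over vm_insert_page(), advancing the user
	 * address by PAGE_SIZE on each iteration.
	 *
	 * New pattern: one call does the loop.  On failure the mmap
	 * handler can simply return the error; its caller destroys the
	 * vma, which removes any pages that were already inserted.
	 */
	return vm_insert_range(vma, vma->vm_start, buf->pages,
			       vma_pages(vma));
}

This matches the behaviour documented in the new kerneldoc above: a failure
part-way through leaves the earlier insertions in place, which is acceptable
for mmap handlers because the vma is torn down on error; other callers have to
clean up themselves.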