
[Xen-devel] Re: [PATCH 2/9] mm: add apply_to_page_range_batch()



..snip..
>  static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
>                                    unsigned long addr, unsigned long end,
> -                                  pte_fn_t fn, void *data)
> +                                  pte_batch_fn_t fn, void *data)
>  {
>       pte_t *pte;
>       int err;
> -     pgtable_t token;
>       spinlock_t *uninitialized_var(ptl);
>  
>       pte = (mm == &init_mm) ?
> @@ -1940,25 +1939,17 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
>       BUG_ON(pmd_huge(*pmd));
>  
>       arch_enter_lazy_mmu_mode();
> -
> -     token = pmd_pgtable(*pmd);
> -
> -     do {
> -             err = fn(pte++, addr, data);
> -             if (err)
> -                     break;
> -     } while (addr += PAGE_SIZE, addr != end);
> -
> +     err = fn(pte, (end - addr) / PAGE_SIZE, addr, data);
>       arch_leave_lazy_mmu_mode();
>  
>       if (mm != &init_mm)
> -             pte_unmap_unlock(pte-1, ptl);
> +             pte_unmap_unlock(pte, ptl);

That looks like a bug fix as well? Did this hit us before the change, or was
it masked by the fact that the code never got here?

>       return err;
>  }
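For context, here is a minimal userspace sketch of the pointer arithmetic
behind that pte_unmap_unlock() argument change. It is not kernel code; the
array and the names (table, NENTRIES) are made up for illustration. In the
old per-PTE loop, fn(pte++, ...) advanced the caller's pointer, so on loop
exit pte pointed one past the last entry and pte-1 was the last entry; with
the batch call the caller's pointer is never advanced, so the original
pointer is what gets handed back:

    #include <stdio.h>

    #define NENTRIES 4

    int main(void)
    {
            int table[NENTRIES] = { 0 };
            int *pte = table;
            int i;

            /* Old style: pointer post-incremented per entry, so after
             * the loop pte is one past the end and pte-1 is the last
             * entry -- the analogue of pte_unmap_unlock(pte - 1, ptl). */
            for (i = 0; i < NENTRIES; i++)
                    *pte++ = i;
            printf("old: unlock argument is entry %td\n",
                   (pte - 1) - table);

            /* New style: one batch call receives the base pointer and a
             * count, and the caller's pointer never moves -- the analogue
             * of pte_unmap_unlock(pte, ptl). */
            pte = table;
            for (i = 0; i < NENTRIES; i++)
                    pte[i] = i;
            printf("new: unlock argument is entry %td\n", pte - table);

            return 0;
    }

The sketch only shows why the argument has to change once the pointer is no
longer advanced; whether the old pte-1 form was ever actually a bug is the
question raised above.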
