
Re: [Xen-devel] [PATCH 07/12] x86/virt/guest/xen: Remove use of pgd_list from the Xen guest code



On Sat, 2015-06-13 at 11:49 +0200, Ingo Molnar wrote:
> xen_mm_pin_all()/unpin_all() are used to implement full guest instance
> suspend/restore. It's a stop-all method that needs to iterate through
> all allocated pgds in the system to fix them up for Xen's use.
> 
> This code uses pgd_list, probably because it was an easy interface.
> 
> But we want to remove the pgd_list, so convert the code over to walk
> all tasks in the system. This is an equivalent method.
> 
> (As I don't use Xen this was only build tested.)

In which case it seems extra important to copy the appropriate
maintainers, which I've done here.

Ian.

> 
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Andy Lutomirski <luto@xxxxxxxxxxxxxx>
> Cc: Borislav Petkov <bp@xxxxxxxxx>
> Cc: Brian Gerst <brgerst@xxxxxxxxx>
> Cc: Denys Vlasenko <dvlasenk@xxxxxxxxxx>
> Cc: H. Peter Anvin <hpa@xxxxxxxxx>
> Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Waiman Long <Waiman.Long@xxxxxx>
> Cc: linux-mm@xxxxxxxxx
> Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
> ---
>  arch/x86/xen/mmu.c | 51 ++++++++++++++++++++++++++++++++++++++-------------
>  1 file changed, 38 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index dd151b2045b0..70a3df5b0b54 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -853,15 +853,27 @@ static void xen_pgd_pin(struct mm_struct *mm)
>   */
>  void xen_mm_pin_all(void)
>  {
> -     struct page *page;
> +     struct task_struct *g, *p;
>  
> -     spin_lock(&pgd_lock);
> +     spin_lock(&pgd_lock); /* Implies rcu_read_lock() for the task list iteration: */
>  
> -     list_for_each_entry(page, &pgd_list, lru) {
> -             if (!PagePinned(page)) {
> -                     __xen_pgd_pin(&init_mm, (pgd_t *)page_address(page));
> -                     SetPageSavePinned(page);
> +     for_each_process_thread(g, p) {
> +             struct mm_struct *mm;
> +             struct page *page;
> +             pgd_t *pgd;
> +
> +             task_lock(p);
> +             mm = p->mm;
> +             if (mm) {
> +                     pgd = mm->pgd;
> +                     page = virt_to_page(pgd);
> +
> +                     if (!PagePinned(page)) {
> +                             __xen_pgd_pin(&init_mm, pgd);
> +                             SetPageSavePinned(page);
> +                     }
>               }
> +             task_unlock(p);
>       }
>  
>       spin_unlock(&pgd_lock);
> @@ -967,19 +979,32 @@ static void xen_pgd_unpin(struct mm_struct *mm)
>   */
>  void xen_mm_unpin_all(void)
>  {
> -     struct page *page;
> +     struct task_struct *g, *p;
>  
> -     spin_lock(&pgd_lock);
> +     spin_lock(&pgd_lock); /* Implies rcu_read_lock() for the task list iteration: */
>  
> -     list_for_each_entry(page, &pgd_list, lru) {
> -             if (PageSavePinned(page)) {
> -                     BUG_ON(!PagePinned(page));
> -                     __xen_pgd_unpin(&init_mm, (pgd_t *)page_address(page));
> -                     ClearPageSavePinned(page);
> +     for_each_process_thread(g, p) {
> +             struct mm_struct *mm;
> +             struct page *page;
> +             pgd_t *pgd;
> +
> +             task_lock(p);
> +             mm = p->mm;
> +             if (mm) {
> +                     pgd = mm->pgd;
> +                     page = virt_to_page(pgd);
> +
> +                     if (PageSavePinned(page)) {
> +                             BUG_ON(!PagePinned(page));
> +                             __xen_pgd_unpin(&init_mm, pgd);
> +                             ClearPageSavePinned(page);
> +                     }
>               }
> +             task_unlock(p);
>       }
>  
>       spin_unlock(&pgd_lock);
>  }
>  
>  static void xen_activate_mm(struct mm_struct *prev, struct mm_struct *next)
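
For readers skimming the hunks above: both functions now share the same
shape, namely "walk every task, take task_lock() to stabilize ->mm, and
fix up that mm's pgd". A minimal, Xen-independent sketch of that pattern
follows; fixup_pgd() is a hypothetical stand-in for
__xen_pgd_pin()/__xen_pgd_unpin(), and the explicit rcu_read_lock() here
replaces the pgd_lock that the patch relies on to imply it:

#include <linux/sched.h>        /* for_each_process_thread(), task_lock() */
#include <linux/rcupdate.h>     /* rcu_read_lock() */
#include <linux/mm_types.h>     /* struct mm_struct */
#include <asm/pgtable.h>        /* pgd_t */

/* Hypothetical placeholder for the per-pgd work (__xen_pgd_pin() etc.). */
static void fixup_pgd(pgd_t *pgd)
{
        /* Arch/hypervisor-specific work on one pgd would go here. */
}

static void fixup_all_pgds(void)
{
        struct task_struct *g, *p;

        /*
         * for_each_process_thread() must run under rcu_read_lock() (or
         * tasklist_lock); the patch above relies on pgd_lock implying
         * this, here it is taken explicitly to keep the sketch
         * self-contained.
         */
        rcu_read_lock();

        for_each_process_thread(g, p) {
                struct mm_struct *mm;

                task_lock(p);           /* stabilizes p->mm */
                mm = p->mm;             /* NULL for kernel threads */
                if (mm)
                        fixup_pgd(mm->pgd);
                task_unlock(p);
        }

        rcu_read_unlock();
}

Since all threads of a process share one mm, this walk visits the same
pgd once per thread; in the real code the PagePinned()/PageSavePinned()
checks are what keep the pin/unpin idempotent across those repeat
visits, so no separate de-duplication is needed.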



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

