
Re: [Xen-devel] Vanilla Linux and has_foreign_mapping



Hi there,

Take a look at changeset 488 in
http://xenbits.xensource.com/linux-2.6.18-xen.hg

There you will see that we now have a new page flag (_PAGE_IO) that we apply
to any PTE which maps I/O pages or 'foreign' pages. We use this to avoid
pseudophysical<->machine translations when getting/setting ptes, because
such translations are not generally valid for such pages, but equally it may
obviate the need for the has_foreign_mappings flag. This is because we can
now have pte_pfn() return an invalid pfn for foreign mappings based on the
_PAGE_IO flag in the pte, rather than by the roundabout logic implemented in
mfn_to_local_pfn(). The latter relies on us keeping the pagetables pinned --
so if we no longer rely on it then we no longer need to forcibly keep things
pinned via the has_foreign_mappings flag.
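
To make that concrete, here is a minimal sketch of the idea. This is not the
actual changeset 488 code; INVALID_P2M_ENTRY and pte_mfn() are just the
helpers I'd expect to reach for:

static inline unsigned long pte_pfn(pte_t pte)
{
        /* Foreign and I/O pages have no valid local pfn, so don't even
         * attempt the machine->pseudophysical translation for them. */
        if (pte_val(pte) & _PAGE_IO)
                return INVALID_P2M_ENTRY;

        /* Normal RAM page: translate the machine frame as usual. */
        return mfn_to_pfn(pte_mfn(pte));
}

Anything consuming pte_pfn() then simply treats INVALID_P2M_ENTRY as "not a
local page", with no need to consult pinning state.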

Unfortunately I had to keep the has_foreign_mappings flag for other reasons
in the 2.6.18 tree: gntdev and blktap device drivers use it to forcibly keep
ptes pinned which contain grant mappings. This could probably be fixed
within those drivers by having them pin just the pte pages that contain
grant mappings. Then even on early-unpin (which has_foreign_mappings
currently defeats) we would still have the necessary pte-containing pages
pinned even though the pgd-containing page gets unpinned.
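
Roughly, I'd imagine each driver doing something like the following when it
installs a grant mapping into a pte page. A sketch only: pin_grant_pte_page()
is a made-up name, and the header paths differ between the 2.6.18 tree and
mainline:

#include <xen/interface/xen.h>    /* struct mmuext_op, MMUEXT_PIN_L1_TABLE */
#include <asm/xen/hypercall.h>    /* HYPERVISOR_mmuext_op() */
#include <asm/xen/page.h>         /* pfn_to_mfn() */

/* Pin a single L1 (pte-containing) page so Xen keeps validating it even
 * after the rest of the pagetable tree has been unpinned. */
static void pin_grant_pte_page(unsigned long pfn)
{
        struct mmuext_op op;

        op.cmd = MMUEXT_PIN_L1_TABLE;
        op.arg1.mfn = pfn_to_mfn(pfn);

        if (HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF))
                BUG();
}

The matching teardown would issue MMUEXT_UNPIN_TABLE once the last grant
mapping in that pte page is gone.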

It's all a bit of a pain I'm afraid. :-(

 -- Keir

On 20/4/08 22:19, "Michael Abd-El-Malek" <mabdelmalek@xxxxxxx> wrote:

> Hello,
> 
> I'm trying to add support to Linux 2.6.25 for the "has_foreign_mappings" MMU
> context flag.  Xen's Linux 2.6.18 tree uses this flag so that page tables are
> properly disposed of when an application that has foreign mappings exits.
> See:
> http://lists.xensource.com/archives/html/xen-devel/2006-08/msg00038.html
> 
> Here is my attempt:
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 2a054ef..3e51897 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -633,8 +633,13 @@ void xen_exit_mmap(struct mm_struct *mm)
> spin_lock(&mm->page_table_lock);
> 
> /* pgd may not be pinned in the error exit path of execve */
> - if (PagePinned(virt_to_page(mm->pgd)))
> -  xen_pgd_unpin(mm->pgd);
> + if (PagePinned(virt_to_page(mm->pgd))) {
> +        if (mm->context.has_foreign_mappings) {
> +            printk("%s: because of has_foreign_mappings, delaying unpinning\n", __FUNCTION__);
> +        } else {
> +            xen_pgd_unpin(mm->pgd);
> +        }
> + }
> 
> spin_unlock(&mm->page_table_lock);
>   }
> diff --git a/include/asm-x86/mmu.h b/include/asm-x86/mmu.h
> index efa962c..7194698 100644
> --- a/include/asm-x86/mmu.h
> +++ b/include/asm-x86/mmu.h
> @@ -18,6 +18,9 @@ typedef struct {
> int size;
> struct mutex lock;
> void *vdso;
> +#ifdef CONFIG_XEN
> + int has_foreign_mappings;
> +#endif
>   } mm_context_t;
> 
>   #ifdef CONFIG_SMP
> 
> Unfortunately, I got the following kernel crash on process exit:
> 
> BUG: unable to handle kernel paging request at ebdae008
> IP: [<c01157f9>] pgd_mop_up_pmds+0x6a/0xd8
> *pdpt = 000000007f494027
> Oops: 0003 [#1] PREEMPT SMP
> Modules linked in: efsvm(F) nfs lockd sunrpc dm_snapshot dm_mirror dm_mod
> 
> Pid: 5565, comm: a.out Tainted: GF        (2.6.25 #9)
> EIP: 0061:[<c01157f9>] EFLAGS: 00010246 CPU: 0
> EIP is at pgd_mop_up_pmds+0x6a/0xd8
> ...
> Call Trace:
>   [<c01158bf>] pgd_free+0x8/0x19
>   [<c011fca0>] __mmdrop+0x16/0x2a
>   [<c01244bc>] do_exit+0x1b3/0x569
>   [<c01248d5>] do_group_exit+0x63/0x7a
>   [<c0107066>] syscall_call+0x7/0xb
> 
> Has anyone else implemented this functionality in the mainline Linux tree?
> Any thoughts?
> 
> Thanks,
> Mike
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

