
[Xen-devel] How does munmap() call Xen to decrease the reference counter when releasing a foreign page?


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: Chen Haogang <haogangchen@xxxxxxxxx>
  • Date: Tue, 21 Jul 2009 12:31:52 +0800
  • Delivery-date: Tue, 21 Jul 2009 05:51:56 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

The ioemu-qemu-xen code uses xc_map_foreign_batch() to map pages from
an HVM domain. When the "map cache" becomes full (more precisely, on a
hash collision), it calls munmap() to release the previously mapped
address space.
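
For context, the pattern looks roughly like this -- a minimal userspace
sketch against the Xen 3.3-era libxc interface; the pfn array, sizes and
error handling are placeholders of mine, not the actual qemu mapcache
code:

#include <stdint.h>
#include <sys/mman.h>
#include <xenctrl.h>

/* Map `num` guest pfns of HVM domain `domid`, use them, then drop the
 * mapping the way the qemu mapcache does on eviction: plain munmap(). */
static void map_and_release(uint32_t domid, xen_pfn_t *pfns, int num)
{
        int xc = xc_interface_open();   /* Xen 3.3: returns an fd */
        void *addr;

        if (xc < 0)
                return;

        addr = xc_map_foreign_batch(xc, domid, PROT_READ | PROT_WRITE,
                                    pfns, num);
        if (addr != NULL) {
                /* ... access the mapped guest memory here ... */

                /* Eviction path: no explicit unmap hypercall wrapper,
                 * just munmap() on the whole region.                  */
                munmap(addr, (size_t)num * XC_PAGE_SIZE);
        }

        xc_interface_close(xc);
}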

In Xen 3.3.0, xc_map_foreign_batch() eventually calls
HYPERVISOR_mmu_update(), which traps into Xen to increase the reference
counter of the mapped machine page. However, it seems to me that
munmap() just silently sets the PTEs to 0 in Dom0's kernel, without
notifying Xen to release the original page. Did I miss something? If
so, where is the code that informs Xen to decrease the reference
counter when a foreign page is unmapped via munmap()?
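
For reference, the mapping side boils down to an mmu_update request that
carries the foreign domid. The sketch below is based on my reading of the
public interface in xen/include/public/xen.h and the privcmd mapping path
in the 2.6.18-xen tree; the helper and its arguments are my own, not the
actual driver code:

#include <xen/interface/xen.h>   /* mmu_update_t, MMU_NORMAL_PT_UPDATE */
#include <asm/hypercall.h>       /* HYPERVISOR_mmu_update()            */

/* Point one PTE at a foreign machine frame.  `pte_machine_addr` is the
 * machine address of the PTE slot, `new_pte_val` the raw PTE value that
 * refers to the foreign MFN.  Passing the foreign domid (rather than
 * DOMID_SELF) is what makes Xen account a reference against the foreign
 * page on behalf of this domain.                                        */
static int map_one_foreign_frame(uint64_t pte_machine_addr,
                                 uint64_t new_pte_val, domid_t foreign_dom)
{
        mmu_update_t u;
        unsigned int success_count = 0;

        u.ptr = pte_machine_addr | MMU_NORMAL_PT_UPDATE;
        u.val = new_pte_val;

        return HYPERVISOR_mmu_update(&u, 1, &success_count, foreign_dom);
}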

The following is the call path of munmap() in Dom0's kernel; a
simplified sketch of the innermost loop follows the trace:

do_munmap
    unmap_region
        unmap_vmas
            unmap_page_range
                zap_pud_range
                    zap_pmd_range
                        zap_pte_range
                            ptep_get_and_clear_full
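
As far as I can tell from mm/memory.c in the 2.6.18-xen tree, the
innermost step looks roughly like the following (a simplified sketch
from memory, so details may be off). The point is where the `full`
argument comes from: tlb->fullmm, which is 0 for a partial munmap()
such as a mapcache eviction, so ptep_get_and_clear_full() falls through
to ptep_get_and_clear():

#include <linux/mm.h>      /* pte_offset_map_lock(), pte_present() */
#include <asm/tlb.h>       /* struct mmu_gather (tlb->fullmm)      */

static void zap_pte_range_sketch(struct mmu_gather *tlb,
                                 struct mm_struct *mm, pmd_t *pmd,
                                 unsigned long addr, unsigned long end)
{
        spinlock_t *ptl;
        pte_t *pte = pte_offset_map_lock(mm, pmd, addr, &ptl);

        do {
                pte_t ptent = *pte;

                if (pte_present(ptent))
                        ptent = ptep_get_and_clear_full(mm, addr, pte,
                                                        tlb->fullmm);
                /* ... cleared value feeds rmap/accounting, then
                 * tlb_remove_tlb_entry() ...                       */
        } while (pte++, addr += PAGE_SIZE, addr != end);

        pte_unmap_unlock(pte - 1, ptl);
}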

In include/asm-x86_64/mach-xen/asm/pgtable.h:

274 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
                                           unsigned long addr, pte_t *ptep)
275 {
276         pte_t pte = *ptep;
277         if (!pte_none(pte)) {
278                 if (mm != &init_mm)
279                         pte = __pte_ma(xchg(&ptep->pte, 0));
280                 else
281                         HYPERVISOR_update_va_mapping(addr, __pte(0), 0);
282         }
283         return pte;
284 }
285
286 static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
                                                unsigned long addr,
                                                pte_t *ptep, int full)
287 {
288         if (full) {
289                 pte_t pte = *ptep;
290                 if (mm->context.pinned)
291                         xen_l1_entry_update(ptep, __pte(0));
292                 else
293                         *ptep = __pte(0);
294                 return pte;
295         }
296         return ptep_get_and_clear(mm, addr, ptep);
297 }

In the code above, I believe xen_l1_entry_update() on line 291 should
not be called, since qemu's mm->context.pinned is false. Likewise,
HYPERVISOR_update_va_mapping() on line 281 is not called, because
qemu's mm is not &init_mm. All the remaining paths simply set the PTE
to zero through a plain assignment or an atomic exchange.
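
In other words, for qemu's address space (mm != &init_mm,
mm->context.pinned == 0) both branches appear to reduce to a purely
local store or exchange. The condensed sketch below is my own
restatement of the two helpers quoted above, assuming the same
mach-xen pgtable.h context, not code from the tree:

/* Condensed restatement for qemu's case (user mm, not pinned).
 * Neither branch issues a hypercall.                            */
static inline pte_t qemu_case_clear(struct mm_struct *mm,
                                    unsigned long addr,
                                    pte_t *ptep, int full)
{
        if (full) {
                pte_t pte = *ptep;
                *ptep = __pte(0);                    /* line 293 */
                return pte;
        }
        return __pte_ma(xchg(&ptep->pte, 0));        /* line 279 */
}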

-- 
Best regards,
Chen Haogang

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

