
Re: [Xen-devel] Questions about xen memory management



Thanks for your reply.

2013/8/8 Ian Campbell <Ian.Campbell@xxxxxxxxxx>
On Thu, 2013-08-08 at 14:59 +0800, Josh Zhao wrote:
> Hi,
> I am reading the ARM MM initialisation code, and there are two questions
> I can't understand:
> 1)  Both init_xenheap_pages() and init_domheap_pages() invoke
> init_heap_pages() to initialise page management, but there is no
> flag to say whether those pages belong to the xenheap or the domheap.
> Are the xenheap and the domheap in the same zone?

There are two models for xenheap vs. domheap, and therefore two versions of
init_*heap_pages.

The original model is the split heap model, which is used on platforms
that have smaller virtual address spaces: e.g. arm32, for the moment
arm64 (but I am about to switch it to the second model) and, historically,
x86_32. This is because, as Andy notes, the xenheap must always be
mapped while the domheap is not (and cannot be, on these platforms);
the domheap is mapped only on demand (map_domain_page()).
In this case init_xenheap_pages contains:
    /*
     * Yuk! Ensure there is a one-page buffer between Xen and Dom zones, to
     * prevent merging of power-of-two blocks across the zone boundary.
     */
    if ( ps && !is_xen_heap_mfn(paddr_to_pfn(ps)-1) )
        ps += PAGE_SIZE;
    if ( !is_xen_heap_mfn(paddr_to_pfn(pe)) )
        pe -= PAGE_SIZE;
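
(To see why the one-page buffer is needed: in a buddy allocator the buddy of
a block is found by flipping one bit of its frame number, so at an unaligned
zone boundary the last page of one zone and the first page of the next can be
buddies, and if both were free they would merge into a block straddling the
zones. A standalone sketch, not Xen code, with made-up frame numbers:)

```c
#include <assert.h>

/* In a buddy allocator, the buddy of the order-'o' block starting at
 * frame 'mfn' is found by flipping bit 'o' of the frame number. */
static unsigned long buddy_of(unsigned long mfn, unsigned int order)
{
    return mfn ^ (1UL << order);
}

/* Suppose the xenheap zone ends at frame 0x800 and the domheap zone
 * starts at frame 0x801 (an unaligned boundary).  Those two frames are
 * order-0 buddies: if both were free, the allocator would merge them
 * into one order-1 block crossing the zone boundary.  Withholding one
 * boundary page (the ps += PAGE_SIZE / pe -= PAGE_SIZE adjustment
 * above) ensures the page next to the boundary is never free, so no
 * merge can cross it. */
```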


Yes, the is_xen_heap_mfn() function can tell whether a physical address is in the xenheap or not. But how does init_heap_pages() know whether a page belongs to the xenheap or the domheap? phys_to_nid() always returns 0.

The second model is used on systems which have a large enough virtual
address space to map all of RAM, currently x86_64 and soon arm64. In this
case there is only one underlying pool of memory and the split is more
logical than real, although it is tracked by setting PGC_xen_heap when
allocating xenheap pages. In this case the domheap is actually always
mapped, but you must still use map_domain_page to access it (so that
common code works on both models).
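
(The flag-based tracking can be pictured with a toy page_info; the field
layout and bit position here are illustrative, not Xen's actual ones:)

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the logical split: one bit in the page's count_info
 * records that the page was handed out as xenheap.  The bit position
 * is made up for this sketch. */
#define PGC_xen_heap (1UL << 31)

struct page_info {
    unsigned long count_info;
};

/* An alloc_xenheap_pages()-style path would set the flag on each page
 * it returns... */
static void mark_as_xenheap(struct page_info *pg)
{
    pg->count_info |= PGC_xen_heap;
}

/* ...and an is_xen_heap_page()-style check just tests the bit. */
static bool page_is_xenheap(const struct page_info *pg)
{
    return (pg->count_info & PGC_xen_heap) != 0;
}
```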

There is actually an extension to the second model for systems which
have enormous amounts of physical memory (e.g. >5TB on x86_64), which
brings back the xenheap/domheap split, but in a different way from the
first model. In this case the split is implemented in alloc_xenheap_pages
by consulting xenheap_bits to restrict allocations to the direct-mapped
region only.
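
(The xenheap_bits restriction amounts to an address-width check: a frame is
acceptable only if its machine address fits in xenheap_bits address bits,
i.e. lies inside the direct-mapped region. A sketch of that predicate; the
real allocator expresses it as a width limit on the allocation request
rather than filtering addresses after the fact:)

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch only: true if 'maddr' is representable in 'xenheap_bits'
 * address bits, i.e. falls inside the direct-mapped region. */
static bool in_direct_map(unsigned long long maddr, unsigned int xenheap_bits)
{
    return (maddr >> xenheap_bits) == 0;
}
```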


setup_xenheap_mappings() maps xenheap physical memory one-to-one into the 1G-2G virtual address range. I am wondering what that xenheap virtual space (1G-2G) is used for, given that I can already allocate pages from the xenheap with alloc_xenheap_pages() and track them via the frametable.

> 2) What's the vmap.c used for ?

To map arbitrary physical addresses into the virtual address space.

>  I saw that only ioremap uses it.  If so, there seems to be no need to
> allocate pages to fill the whole VMAP range (256M - 1G) with
> alloc_domheap_page().

This is allocating the page table pages up front, which simplifies the
creation of mappings. In theory this could be done on demand, but I
suppose it is simpler to do it up front.


In vm_init():

    for ( i = 0, va = (unsigned long)vm_bitmap; i < nr; ++i, va += PAGE_SIZE )
    {
        struct page_info *pg = alloc_domheap_page(NULL, 0);

        map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR);
        clear_page((void *)va);
    }

It seems this not only allocates page table pages, but also allocates vmap pages with alloc_domheap_page().
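
(The pages allocated in that loop back vm_bitmap itself, the bitmap that
records which pages of the vmap region are in use; the loop covers only the
nr pages the bitmap occupies, not the whole 256M-1G range. A toy model of
how such a bitmap drives allocation, much simplified from Xen's actual
vm_alloc():)

```c
#include <assert.h>
#include <limits.h>

/* Toy model (not Xen's vmap.c): bit i of the bitmap says whether
 * virtual page i of the vmap region is in use. */
#define VMAP_PAGES 64
static unsigned char vm_bitmap[VMAP_PAGES / CHAR_BIT];

static int bit_get(unsigned int i)
{
    return vm_bitmap[i / CHAR_BIT] >> (i % CHAR_BIT) & 1;
}

static void bit_set(unsigned int i)
{
    vm_bitmap[i / CHAR_BIT] |= 1u << (i % CHAR_BIT);
}

/* vm_alloc-style search: find 'nr' consecutive free pages, mark them
 * used, and return the index of the first one (or -1 if none fit). */
static int vm_alloc_sketch(unsigned int nr)
{
    unsigned int start, i;

    for ( start = 0; start + nr <= VMAP_PAGES; start++ )
    {
        for ( i = 0; i < nr && !bit_get(start + i); i++ )
            ;
        if ( i == nr )
        {
            for ( i = 0; i < nr; i++ )
                bit_set(start + i);
            return (int)start;
        }
        start += i; /* skip straight past the in-use bit we hit */
    }
    return -1;
}
```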




Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
