
Re: [Xen-ia64-devel] [PATCH] Use saner dom0 memory and vcpu defaults, don't panic on over-allocation



On Wed, Aug 01, 2007 at 02:49:19PM -0400, Jarod Wilson wrote:

> > Rather than that approach, a simple 'max_dom0_pages =
> > avail_domheap_pages()' is working just fine on both my 4G and 16G boxes,
> > with the 4G box now getting ~260MB more memory for dom0 and the 16G box
> > getting ~512MB more. Are there potential pitfalls here? 

Hi Jarod. Sorry for the delayed reply.
Looking back at Alex's mail, the xenheap might have been exhausted at
that time. However, now that the p2m table is allocated from the
domheap, the memory for the p2m table has to be accounted for.
It can be estimated, very roughly, as dom0_pages / PTRS_PER_PTE.
Here PTRS_PER_PTE = 2048 with a 16KB page size, 1024 with an 8KB page
size...

With a 16KB page size, the p2m table needs:
    about  2MB for a   4GB dom0
    about  8MB for a  16GB dom0
    about 43MB for an 86GB dom0
    about 48MB for a  96GB dom0

(This counts only PTE pages and assumes that dom0 memory is
contiguous. A more precise calculation would also count PMD and PGD
pages and account for sparseness, but their size is only on the order
of KB; even for a 1TB dom0 it would be about 1MB, so I ignored them.)
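
For what it's worth, the figures above can be reproduced with a
throwaway C program (a sketch only; the 8-byte PTE size and the box
sizes are my assumptions, this is not Xen code):

    #include <stdio.h>

    int main(void)
    {
        /* PTRS_PER_PTE = page size / PTE size, assuming 8-byte PTEs */
        const unsigned long page_size    = 16UL << 10;      /* 16KB */
        const unsigned long ptrs_per_pte = page_size / 8;   /* 2048 */
        const unsigned long dom0_gb[]    = { 4, 16, 86, 96 };
        int i;

        for (i = 0; i < 4; i++) {
            /* dom0 pages = dom0 bytes / page size */
            unsigned long dom0_pages = (dom0_gb[i] << 30) / page_size;
            /* one PTE per dom0 page, ptrs_per_pte PTEs per PTE page */
            unsigned long pte_pages = dom0_pages / ptrs_per_pte;
            printf("%3luGB dom0 -> ~%2luMB of PTE pages\n",
                   dom0_gb[i], pte_pages * page_size >> 20);
        }
        return 0;
    }

It prints 2MB/8MB/43MB/48MB for the four sizes above. Assuming each
level likewise holds 2048 entries, even the 32768 PTE pages of a 1TB
dom0 need only 16 PMD pages plus one PGD page, about 272KB, which
matches the KB-order claim.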

With max_dom0_pages = avail_domheap_pages() as you proposed, I suppose
we would end up using the xenheap for the p2m table, since no domheap
pages are left over for it. The xenheap is at most 64MB, so it is
precious.

How about this heuristic?
max_dom0_pages = avail_domheap_pages() - avail_domheap_pages() / PTRS_PER_PTE;
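
As a quick sanity check (my arithmetic, not from the thread): the
real constraint is dom0_pages + dom0_pages / PTRS_PER_PTE <= avail,
whose exact solution is dom0_pages = avail * P / (P + 1) with
P = PTRS_PER_PTE. Comparing the two for a hypothetical 96GB box:

    #include <stdio.h>

    int main(void)
    {
        const unsigned long P = 2048;                   /* PTRS_PER_PTE, 16KB pages */
        const unsigned long avail = (96UL << 30) >> 14; /* 96GB box in 16KB pages */

        unsigned long heuristic = avail - avail / P;    /* the proposed formula */
        unsigned long exact     = avail * P / (P + 1);  /* dom0 + dom0/P <= avail */

        /* The gap is roughly avail / P^2 pages, i.e. a page or two
         * at most for realistic memory sizes. */
        printf("avail=%lu heuristic=%lu exact=%lu diff=%lu\n",
               avail, heuristic, exact, exact - heuristic);
        return 0;
    }

So the simple subtraction stays within a page or so of the exact
bound while erring on the safe side.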

-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

