
Re: [Xen-devel] question about running vm change its mem maxsize

Thank you for your reply

Daniel Stodden wrote:
On Mon, 2007-03-19 at 20:21 +0800, tgh wrote:
Thank you for your reply

Daniel Stodden wrote:
On Mon, 2007-03-19 at 09:20 +0800, tgh wrote:
I read the code of xc_linux_build() and xc_domain_setmaxmem(), and I am confused about how "xm mem-max" changes the maximum memory size of a running VM

In xc_domain_setmaxmem(), the XEN_DOMCTL_max_mem hypercall is issued, which just does "d->max_pages = new_max;"
that variable determines the maximum size. the code verifies that the
new size won't be below the previous one, and therefore just needs to
readjust it.

it doesn't actually have to allocate memory. this is done on demand,
i.e. as soon as the domain references a new page frame within its
virtual machine address space.

I see
while in xc_linux_build(), before the vm boots up, the max-size pfn array is allocated with a fixed size: page_array = malloc(nr_pages * sizeof(unsigned long))
i suppose you misunderstood what that call really does. it's not
changing the maximum vm size, but allocating the initial number of pages
required to load the guest operating system image into. that's typically
much less than d->max_pages.

"page_array = malloc(nr_pages *sizeof(unsigned long))" in xc_linux_build() is 
not to allocate the physical memory to the VM,then which code or function allocate the 
phy-mem to the VM?

I am confused about it

i see, i'm not sure anymore whether i understand your problem,
so let's try going through it more slowly.

not the malloc() above, but the call to
xc_domain_memory_populate_physmap() allocates memory. domain memory is
organized in pages. those pages are allocated upon demand by the
software using it.
I searched the code, and it seems that

the domain builder acts as a bootloader, and then the guest linux will set up and control its own memory, is that right? of course, any memory mapping will involve the related mapping in xen and so on.

in traditional linux, the kernel knows how much physical memory it owns, and it knows that all of that physical memory is actually there. under xen, the guest linux knows how much physical memory it owns, but does not know that all of it is there, since some of it has not been allocated yet, is that right?

that is, while the guest linux is running, it will request memory which it will then get; for the guest, its memory (both physical and virtual) is allocated dynamically, while traditional linux has all its physical memory when it boots up and only gets virtual memory dynamically, is that right?

could you help me
thanks in advance

let's say you build a domain of size 256MB. that domain initially won't
need the whole 256MB to run. what it initially needs is memory where the
kernel is loaded. let's say that's 4MB or something. as soon as the guest
kernel is running, it will allocate any additionally needed memory by
itself, using some magic mentioned in my previous reply.
it won't get more than those 256MB, unless root on dom0 is willing to
say so. it will learn about that limit in a similar fashion to the way a
native operating system gathers information about installed hardware
from the bios. that's the max_pages variable above.

before that, domain0 acts as the boot loader, responsible for loading
the kernel image into the virtual machine's main memory, setting up a
virtual cpu to point at the kernel's entry point, preparing virtual I/O
devices and whatever else it sees fit, and then firing up the domain.

in order to install the kernel, some of the guest vm memory must be
available. max_pages just says how much it could be; presently it's
still zero. in our case, the domain builder needs the let's-say-4MB
mentioned above, at specific positions in the guest memory map, to copy
the kernel image into (e.g. linux is typically mapped at addresses
above 1MB). that is what xc_domain_memory_populate_physmap() is doing.

while xen allocates the memory, it won't load the kernel image. that's
way better done in user space. for that purpose, and about a ton of
others, dom0 is privileged to mmap() the page frames of arbitrary other
domains. if you follow the code, you will see calls to
xc_map_foreign_range(). e.g. xc_load_elf.c:loadelfimage().

but, in order to map these pages, the domain builder needs to know which
pages xen actually allocated. pages are numbered by the upper bits of
their location in physical memory. physical memory (as opposed to
per-domain 'pseudo-physical' memory) is identified by machine frame
numbers (mfn). the page_array argument to populate_physmap is requesting
exactly these.

one mfn is an unsigned long. for a domain with size nr_pages, you need
nr_pages * sizeof(unsigned long) bytes to hold the mfn table for the
whole domain.
then how does a running vm change its maximum memory size, especially to expand it?
for an unprivileged guest os, there is no such call. similar to the way
your desktop machine (hopefully) won't call dell to order more RAM
without your knowledge.

changing machine size is an administrative operation. it is defined
during VM creation, and potentially subject to refinement by the
administrator in dom0.
I see. it is domain0 that has the control interface to set and change the maximum size of the VM's memory

when the VM is running, we can run "xm mem-set" or "xm mem-max" in domain0's console to change the vm's memory

I have used it, and want to know what happens inside xen, especially in a paravirtualized VM

when max_pages is set, there's not much happening initially, as said
before. the domain is allowed to allocate more memory, but that memory
remains unallocated up to the point where the domain actually allocates
it.

i'm not entirely sure how this is currently mapped to physical memory in
xen. i believe max_pages may be different from the amount of physical
memory announced to the domain. alternatives might include memory
hotplug apis in the guest operating system. maybe someone else can
comment.


Xen-devel mailing list