
[Xen-API] Re: FW: [Xen-users] XCP Memory static/dynamic and overcommit



Hi David

Hi all. So I have been playing around with XCP and the static/dynamic
memory parameters.  I have a few behavioral questions I would like to
pin down:

-Is the static-max quantity of free memory on the host always
required before the guest VM can be started? I assume so, since you
don't know a priori whether the guest you are booting supports Xen
or not. But if this is true, what is the use of static-min?  When I
boot a guest, does it just determine the highest memory it can take
in the range of static-min to static-max, given any ability to
shrink other guests that have Xen-enabled kernels?

XCP supports the ability to dynamically add and remove memory
from a running guest, without rebooting that guest.

In order to add or remove memory, XCP relies on the action of
a co-operating balloon driver running within each guest. XCP
can decrease guest memory by asking the balloon driver to return
memory to Xen, or increase memory by asking it to re-allocate
memory from Xen.

The balloon driver achieves this by maintaining a memory
"balloon" within the guest's physical memory space. While pages
are within the balloon, Xen is able to use those pages for other
guests on the same host.

To "inflate" the balloon (and thus reduce the apparent size of
a guest) a balloon driver will use an OS-specific memory
allocation function to allocate pinned physical memory pages
from the guest OS, thus artificially increasing memory pressure
within the guest. It can then return those pages to Xen (to be
reused by other guests).

To "deflate" the balloon (and thus increase the apparent size of
a guest) a balloon driver will allocate memory pages from Xen,
and then use an OS-specific memory deallocation function to
return the memory pages back to the guest OS, thus decreasing
memory pressure within the guest.
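
As a rough illustration of the control loop described above, here
is a minimal OCaml sketch. It is not the real driver (which lives
inside the guest kernel); read_target_kib, inflate and deflate are
hypothetical helpers standing in for the toolstack-provided target
(conventionally the xenstore key "memory/target", in KiB) and the
OS-specific page allocation/release calls:

  (* Hypothetical sketch of one iteration of a balloon driver's
     control loop. [read_target_kib] reads the target set by the
     toolstack; [current_kib] is the guest's current allocation. *)
  let balloon_step ~read_target_kib ~current_kib ~inflate ~deflate =
    let target = read_target_kib () in
    if target < current_kib then
      (* Guest must shrink: pin pages and hand them back to Xen. *)
      inflate (current_kib - target)
    else if target > current_kib then
      (* Guest may grow: take pages from Xen, release them to the OS. *)
      deflate (target - current_kib)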

XCP provides four memory configuration fields through which
administrators can control this behaviour:

 * static-min
 * dynamic-min
 * dynamic-max
 * static-max

The fields static-{min,max} act as *hard* lower and upper
bounds for a guest's memory. For a running guest:
 * it's not possible to assign the guest more memory than
   static-max without first shutting down the guest.
 * it's not possible to assign the guest less memory than
   static-min without first shutting down the guest.

The fields dynamic-{min,max} act as *soft* lower and upper
bounds for a guest's memory. It's possible to change these
fields even when a guest is running.

The dynamic range must lie wholly within the static range. To
put it another way, XCP at all times ensures that:

  static-min <= dynamic-min <= dynamic-max <= static-max
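
Expressed as a predicate, and purely as an illustration (this is
not xapi's actual record type), the invariant looks like this in
OCaml:

  (* The four limits, in ascending order, in bytes. *)
  type memory_limits = {
    static_min  : int64;
    dynamic_min : int64;
    dynamic_max : int64;
    static_max  : int64;
  }

  (* True iff static-min <= dynamic-min <= dynamic-max <= static-max. *)
  let limits_are_valid l =
    l.static_min <= l.dynamic_min
    && l.dynamic_min <= l.dynamic_max
    && l.dynamic_max <= l.static_max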

At all times, XCP will attempt to keep a guest's memory usage
between dynamic-min and dynamic-max.

If dynamic-min = dynamic-max, then XCP will attempt to keep
a guest's memory allocation at a constant size.

If dynamic-min < dynamic-max, then XCP will attempt to give
the guest as much memory as possible, while keeping the guest
within dynamic-min and dynamic-max.

If there is enough memory on a given host to give all resident
guests dynamic-max, then XCP will attempt to do so.

If there is not enough memory to give all guests dynamic-max,
then XCP will ask each of the guests (on that host) to use
an amount of memory that is the same *proportional* distance
between dynamic-min and dynamic-max.
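
In other words, XCP picks a single ratio k between 0 and 1 and
gives each guest dynamic-min + k * (dynamic-max - dynamic-min),
choosing k so that the targets fit in the memory available. Here
is a hedged OCaml sketch of that calculation, reusing the
memory_limits record from the sketch above; it is a simplification
of what squeezed does and ignores rounding and per-guest overheads:

  (* Compute per-guest targets so that every guest sits at the same
     proportional position [k] between its dynamic-min and
     dynamic-max, with the targets summing to roughly [available]. *)
  let proportional_targets ~available (guests : memory_limits list) =
    let sum f =
      List.fold_left (fun acc g -> Int64.add acc (f g)) 0L guests in
    let total_min = sum (fun g -> g.dynamic_min) in
    let total_max = sum (fun g -> g.dynamic_max) in
    let k =
      if total_max = total_min then 1.0
      else
        Int64.to_float (Int64.sub available total_min)
        /. Int64.to_float (Int64.sub total_max total_min) in
    let k = max 0.0 (min 1.0 k) in
    List.map
      (fun g ->
        let range = Int64.to_float (Int64.sub g.dynamic_max g.dynamic_min) in
        Int64.add g.dynamic_min (Int64.of_float (k *. range)))
      guests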

XCP will refuse to start guests if starting those guests would
cause the sum of all the dynamic-min values to exceed the total
host memory (taking into account various memory overheads).
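
That admission check amounts to simple arithmetic. A sketch, again
reusing the memory_limits record from above, with the various
overheads collapsed into a single caller-supplied figure:

  (* Can a new guest be started on a host with [host_total] bytes,
     given the guests already resident?  Real xapi also accounts
     for dom0 and per-guest overheads, represented here only by the
     caller-supplied [overhead] value. *)
  let can_start ~host_total ~overhead ~resident new_guest =
    let sum_dynamic_min =
      List.fold_left
        (fun acc g -> Int64.add acc g.dynamic_min)
        0L (new_guest :: resident) in
    Int64.add sum_dynamic_min overhead <= host_total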

-For guests running Xen-enabled kernels, wouldn't it actually be
better if dynamic-max could be higher than static-max? I.e. you
could imagine that you have a lot of VMs running on one host; to
start new ones you need to have them boot with a small amount of
physical memory (say 256MB), but if any one of them is under memory
pressure you would like it to be able to grow up to some cap, say
1024MB or some such, pending free memory being available to pull
from other guests, or just plain free on the host.

As mentioned above, at all times XCP ensures that:

  static-min <= dynamic-min <= dynamic-max <= static-max

so dynamic-max can never exceed static-max. To give a guest room to
grow beyond its current upper bound, static-max itself must be
raised, and that requires shutting the guest down first.

-I have a host with 4GB of memory. I configured 3 Debian Lenny
guests, all running the Xen-enabled kernel; they were set to have a
static-max of 3GB, static-min of 256MB, dynamic-max of 512MB and
dynamic-min of 256MB. I logged in to one of them and put significant
memory pressure on it, hoping I could get that guest's memory to
grow while the others were idle.  However, my experience was that
the guests would set their memory directly at whatever dynamic-max
is set to.

This is expected. In this case, the host has more than enough
memory to assign all guests memory equal to their dynamic-max
(three guests at 512MB each is only 1.5GB, comfortably less than
the 4GB available).

Is there any way for the guests to adjust their memory footprint on
the fly based on their memory pressure?  I.e. what I'd really like
is:

--boot-memory: the quantity of memory used to boot the guest,
  similar to static-max
--dynamic-max: the largest quantity of memory the guest could
  potentially grow to; this could be greater than boot-memory

In principle, there's no reason why this couldn't be done.
Indeed, I agree such a system would be highly desirable.

However, in practice I believe it's quite difficult for a
number of reasons:

One problem is that modern operating systems attempt to maximise
their performance by using a large proportion (if not all) of
their spare memory for buffers. It's hard to know by inspecting
a VM just how much of its memory can be reclaimed without
hurting performance.

I believe it's fairly easy to tell when a guest is in trouble
(by inspecting the page fault rate), but rather more difficult
to tell how much additional memory is required to lift a guest
out of trouble and back into its comfort zone.

Another problem is that it's difficult to react quickly enough
when a previously-starved guest has a sudden instantaneous
requirement for more memory. What if the memory is not
available?

Finally, I suspect it may be difficult (but not impossible) to
come up with measures of memory pressure that work well across
different operating system families running on the same host,
without inadvertently producing a system biased towards a
particular OS family.

And then, through a combination of ballooning, etc., for
kernel-supported guests you could keep the actual dynamic memory as
low as possible (without damaging performance), but allow other
guests that need to temporarily grow/shrink to do so.  This would
all need some sort of fairness policy, etc.  Is anything like this
currently enabled in XCP? And if not, what components exist, or
would be needed, for something like this?

This isn't currently enabled in XCP, but there's no technical
reason why someone couldn't build a plug-in for this.

As mentioned above, XCP currently implements a proportional
policy with respect to determining how much memory (between
dynamic-min and dynamic-max) to assign to a guest.

This policy is actually implemented not by xapi, but by a
special daemon that runs in domain 0, namely the "ballooning"
or "squeezing" daemon ("squeezed" for short).

It would be fairly easy to replace this daemon with another
one, to implement almost any policy you could imagine.

Assuming it's possible to find a good measure of guest memory
pressure (presumably by overcoming the problems listed above),
then it would certainly be possible to write a daemon to
implement the policy.

If you're interested in writing an alternative policy, then
the first place to look is the current policy implementation
in ocaml/xenops/squeeze_*.ml.
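
For example, the core of a replacement policy boils down to a
function with roughly the following shape. This is a hypothetical
interface, not the one used by squeezed; guest_info, the pressure
field and pressure_weighted_targets are all invented for
illustration, and memory_limits is the record from the earlier
sketch:

  (* Share the surplus above the dynamic-mins in proportion to a
     measured per-guest "pressure" signal (e.g. a page-fault-rate
     estimate), capping each guest at its dynamic-max.  Memory left
     over by capped guests is not redistributed in this sketch. *)
  type guest_info = {
    uuid     : string;
    limits   : memory_limits;
    pressure : float;
  }

  let pressure_weighted_targets ~available (guests : guest_info list) =
    let total_min =
      List.fold_left
        (fun acc g -> Int64.add acc g.limits.dynamic_min) 0L guests in
    let surplus = Int64.to_float (max 0L (Int64.sub available total_min)) in
    let total_pressure =
      List.fold_left (fun acc g -> acc +. g.pressure) 0. guests in
    List.map
      (fun g ->
        let share =
          if total_pressure <= 0. then 0.
          else surplus *. g.pressure /. total_pressure in
        let target =
          min g.limits.dynamic_max
            (Int64.add g.limits.dynamic_min (Int64.of_float share)) in
        (g.uuid, target))
      guests

Such a daemon would then translate these per-guest targets into
balloon targets for the individual guests, much as squeezed does
today.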

I hope you find these answers helpful. If not, or if you have
more questions then by all means feel free to ask them on the
"xen-api" list and we'll try to help. :)

All the best

Jonathan

Jonathan Knowles
Citrix Systems

_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api


 

