
Re: [Xen-devel] Fwd: [PATCH] RFC: initial libxl support for xenpaging



On Tue, 2012-02-28 at 13:17 +0000, George Dunlap wrote:
> On Fri, Feb 24, 2012 at 10:11 AM, Ian Campbell
> <Ian.Campbell@xxxxxxxxxx> wrote:
> >> However, I'd say the main public "knobs" should just consist of two
> >> things:
> >> * xl mem-set memory-target.  This is the minimum amount of physical
> >> RAM a guest can get; we make sure that the sum of these for all VMs
> >> does not exceed the host capacity.
> >
> > Isn't this what we've previously called mem-paging-set? We defined
> > mem-set earlier as controlling the amount of RAM the guest _thinks_
> > it has, which is different.
> 
> No, I thought mem-set was supposed to be the Simple Knob, that the
> user turned to say, "I don't care how you do it, just make the guest
> take X amount of RAM".  The whole thing with the pagingdelay and all
> that was how long and whether that Simple Knob would set the balloon
> target first, before resorting to sharing.  Since the user can't
> really control how much sharing happens, it makes sense to me for this
> Simple Knob to also be the "minimum memory this VM should get if all
> extra pages from sharing suddenly disappear".

I think you might be correct. I suspect I wrote the above before I had
fully integrated sharing into my understanding/proposal (I took a few
iterations locally to get it "right").

I don't think this filtered into the actual interface proposal, or do
you see somewhere where it did (modulo the discussion below)?

> >> * xl sharing-policy [policy].  This tells the sharing system how
> >> to use the "windfall" pages gathered from page sharing.
> >>
> >> Then internally, the sharing system should combine the "minimum
> >> footprint" with the number of extra pages and the policy to set the
> >> amount of memory actually used (via balloon driver or paging).
> >
> > This is an argument in favour of mem-footprint-set rather than
> > mem-paging-set?
> >
> > Here is an updated version of my proposed interface which includes
> > sharing, I think as you described (modulo the use of mem-paging-set
> > where you said mem-set above).
> >
> > I also included "mem-paging-set manual" as an explicit thing with an
> > error on "mem-paging-set N" if you don't switch to manual mode. This
> > might be too draconian -- I'm not wedded to it.
> >
> > maxmem=X                        # maximum RAM the domain can ever see
> > memory=M                        # current amount of RAM seen by the
> >                                 # domain
> 
> What do you mean "seen by the domain"?

> If you mean "pages which aren't ballooned", then it looks an awful lot
> to me like you're (perhaps unintentionally) smuggling back into the
> interface "balloon target" and "paging target" (since "memory seen by
> the domain" would then always be equal to "balloon target", and
> "memory actually available" would always equal "paging target").  I
> thought the whole point was to hide all this complexity from the user,
> unless she wants to see it?
> 
> Or am I misunderstanding something?

The amount of RAM which the guest sees is not the same as the balloon
target if the guest has not met that target. So e.g. if a guest is using
10M and we do "mem-set 6M" but the guest only balloons down to 8M, then
the amount of RAM "currently seen" by the guest is 8M, not 6M.

In this situation we would eventually decide to use paging, at which
point the actual RAM used by the guest would drop to 6M, but as far as
the guest knows it is still using 8M, i.e. it "currently sees" 8M while
"memory actually available" is 6M. The balloon target also remains 6M
because we expect the guest to keep on trying.
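
To make the arithmetic concrete, here is a rough Python sketch of that
scenario (the variable names and the helper are invented for
illustration; none of this is a real libxl or xl interface):

    # Toy model of the "mem-set 6M" scenario, all figures in MiB.
    # "seen" is what the guest thinks it has, "actual" is the host RAM
    # really backing it.

    def actual_ram(seen, paged_out, shared):
        # RAM actually backing the guest: what it sees, minus pages the
        # pager has evicted and pages deduplicated by sharing.
        return seen - paged_out - shared

    target = 6              # xl mem-set 6M (also the balloon target)
    seen = 8                # guest only ballooned from 10M down to 8M
    paged_out = shared = 0

    assert actual_ram(seen, paged_out, shared) == 8   # target not yet met

    # Eventually we enable paging to enforce the target; the balloon
    # target stays at 6M because we expect the guest to keep trying.
    paged_out = seen - target                         # pager evicts 2M
    assert actual_ram(seen, paged_out, shared) == 6   # "actual" == target
    assert seen == 8                                  # guest still sees 8M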

You are right though that for a well-behaved guest there will be no
practical difference between the balloon target and the amount of RAM
seen by the guest, at least so long as ballooning is the mechanism by
which we expect guests to meet these targets.

I don't mention sharing in the above because all sharing does is reduce
the memory the guest "actually" has below what it "thinks" it has. So it
might be that we do "mem-set 6M" and the guest makes it down to 8M but
1M of that is shared. At that point we have "guest sees" == 8M and
"actual" == 8-1 == 7M. Eventually we would enable paging to reach the
desired "actual" == 6M, presumably by paging out an additional 1M.

If at this point all of the guest's pages suddenly become unshared then
the pager will kick in and the amount of paged memory will presumably
grow to 2M, maintaining the actual target of 6M.
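
Again as a purely illustrative Python sketch (same invented names, not a
real interface), the sharing case and the sudden unsharing work out like
this:

    # Sharing lowers "actual" below "seen"; the pager absorbs any
    # unsharing so that "actual" stays at the target.  Figures in MiB.

    def required_paging(seen, shared, target):
        # How much the pager must evict so that
        # seen - shared - paged_out == target (never negative).
        return max(seen - shared - target, 0)

    target, seen = 6, 8
    shared = 1
    paged_out = required_paging(seen, shared, target)  # 1M paged out
    assert seen - shared - paged_out == 6              # actual == target

    # All of the guest's shared pages suddenly become unshared:
    shared = 0
    paged_out = required_paging(seen, shared, target)  # grows to 2M
    assert seen - shared - paged_out == 6              # actual still 6M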

Another scenario would be where we mem-set 6M and the guest does
actually meet that target, so "guest sees" == "actual" == 6M and there
is no sharing or paging. If at this point we detect 1M worth of sharable
pages, then "guest sees" == 6M but "actual" == 5M and we have 1M of
spare memory to distribute as per the sharing policy.
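
In the same made-up notation, that second scenario looks like:

    # Guest has met its 6M target, then 1M of its pages turn out to be
    # sharable; "spare" is the windfall for the sharing policy to
    # distribute.  Figures in MiB, names invented for illustration.

    target = seen = 6        # guest ballooned all the way down, no paging
    shared = 1               # 1M deduplicated by the sharing system
    actual = seen - shared   # only 5M of host RAM really backs the guest
    spare = target - actual  # 1M windfall for the sharing policy

    assert (seen, actual, spare) == (6, 5, 1)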

If the policy is such that we need to guarantee to be able to give the
guest 6M again if it ends up unsharing everything, then the sharing
policy would only allow us to use that memory for "ephemeral" purposes.

If the sharing policy does not guarantee that we can get that memory
back then we may find ourselves in a situation where "guest sees" == 6M
but "actual" == 5M, with the slack made up for by paging and not
sharing, despite having done "mem-set 6M". IMHO the user was effectively
asking for this (or at least acknowledging the possibility) when they
chose that sharing policy. In this case the paging target would still be
6M and the pager would, I presume, be actively trying to reduce the
amount of paged RAM, such that if some RAM becomes available it would
suck it up and move closer to "actual" == 6M.
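
One last sketch of that "no guarantee" outcome (same invented names;
nothing here is a real libxl call):

    # The 1M windfall was spent elsewhere, the pages unshare, and paging
    # makes up the slack; the pager then pages back in as host RAM frees
    # up.  Figures in MiB.

    target, seen = 6, 6
    shared = 0               # everything has been unshared again
    free_host_ram = 0        # the windfall was given away, not reclaimable
    paged_out = 1            # so 1M has to stay paged out for now

    assert seen - shared - paged_out == 5   # actual == 5M despite mem-set 6M

    # Later some host RAM frees up; the pager sucks it up and converges
    # back towards actual == target.
    free_host_ram = 1
    paged_out -= min(paged_out, free_host_ram)
    assert seen - shared - paged_out == 6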

Ian.


