
Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of problem and alternate solutions



> From: George Dunlap [mailto:george.dunlap@xxxxxxxxxxxxx]
> Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of problem and alternate solutions
>
> On 14/01/13 18:18, Dan Magenheimer wrote:
> >>>>> i.e. d->max_pages is fixed for the life of the domain and
> >>>>> only d->tot_pages varies; i.e. no intelligence is required
> >>>>> in the toolstack.  AFAIK, the distinction between current_maxmem
> >>>>> and lifetime_maxmem was added for Citrix DMC support.
> [snip]
> > Yes, understood.  Ian please correct me if I am wrong, but I believe
> > your proposal (at least as last stated) does indeed, in some cases,
> > set d->max_pages less than or equal to d->tot_pages.  So AFAICT the
> > change does very much have a bearing on the discussion here.
> 
> Strictly speaking, no, it doesn't have to do with what we're proposing.
> To implement "limit-and-check", you only need to set d->max_pages to
> d->tot_pages.  This capability has been possible for quite a while, and
> was not introduced to support Citrix's DMC.
> 
> > Exactly.  So, in your/Ian's model, you are artificially constraining a
> > guest's memory growth, including any dynamic allocations*.  If, by bad luck,
> > you do that at a moment when the guest was growing and is very much in
> > need of that additional memory, the guest may now swapstorm or OOM, and
> > the toolstack has seriously impacted a running guest.  Oracle considers
> > this both unacceptable and unnecessary.
> 
> Yes, I realized the limitation to dynamic allocation from my discussion
> with Konrad.  This is a constraint, but it can be worked around.

Please say more about how you think it can be worked around.
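
For readers joining the thread: the semantic difference between the
proposed XENMEM_claim_pages reservation and the "limit-and-check"
approach can be sketched in a toy model. The class and method names
below are illustrative only, not Xen's actual interfaces.

```python
import threading

class Host:
    """Toy model of host memory; names are illustrative, not Xen's API."""
    def __init__(self, total_pages):
        self.free = total_pages       # unallocated, unclaimed pages
        self.lock = threading.Lock()

    def claim_pages(self, n):
        # Proposed hypercall semantics: atomically reserve n pages so a
        # long-running domain build either gets all its memory or fails
        # immediately, before any work is done.
        with self.lock:
            if self.free < n:
                return False
            self.free -= n            # pages are spoken for up front
            return True

    def check_then_allocate(self, n):
        # "Limit-and-check" without a claim: the free-memory check and
        # the allocation are separate steps, so a concurrent allocation
        # (ballooning, another domain build) can land in between and
        # the build fails midway instead of up front.
        if self.free < n:             # check
            return False
        with self.lock:               # ...window for a racing allocator...
            self.free -= n            # allocate
        return True

host = Host(total_pages=100)
assert host.claim_pages(60)           # first builder's claim succeeds
assert not host.claim_pages(60)       # second fails cleanly, up front
```

The point of the claim is not that it creates memory, but that a
multi-second domain build cannot lose a race and die halfway through.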

> Even so you rather overstate your case.  Even in the "reservation
> hypercall" model, if after the "reservation" there's not enough memory
> for the guest to grow, the same thing will happen.  If Oracle really
> considered this "unacceptable and unnecessary", then the toolstack
> should have a model of when this is likely to happen and keep memory
> around for such a contingency.

Hmmm... I think you are still missing the point of how
Oracle's dynamic allocations work, as evidenced by the
fact that "Keeping memory around for such a contingency"
makes no sense at all in the Oracle model.  And
"not enough memory for the guest to grow" occurs in the Oracle
model only when physical memory is completely exhausted across
all running domains in the system (i.e., max-of-sums, not
sum-of-maxes), which is a very different constraint.
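
The "max-of-sums, not sum-of-maxes" distinction can be put in toy
numbers (the figures below are purely illustrative):

```python
# Three guests on a 16 GiB host, each with an 8 GiB lifetime maximum
# but currently ballooned down (illustrative numbers only).
host_gib  = 16
max_pages = [8, 8, 8]   # per-domain lifetime maxima (GiB, for clarity)
tot_pages = [5, 4, 3]   # per-domain current allocations

sum_of_maxes = sum(max_pages)   # 24 GiB: a sum-of-maxes admission rule
                                # would refuse this configuration outright
max_of_sums  = sum(tot_pages)   # 12 GiB: the dynamic model's constraint
                                # is only that this stays <= host_gib

assert sum_of_maxes > host_gib  # static partitioning forbids the overcommit
assert max_of_sums <= host_gib  # dynamic model is fine until truly exhausted
```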
 
> > So, I think it is very fair (not snide) to point out that a change was
> > made to the hypervisor to accommodate your/Ian's memory-management model,
> > a change that Oracle considers unnecessary, a change explicitly
> > supporting your/Ian's model, which is a model that has not been
> > implemented in open source and has no clear (let alone proven) policy
> > to guide it.  Yet you wish to block a minor hypervisor change which
> > is needed to accommodate Oracle's shipping memory-management model?
> 
> We've been over this a number of times, but let me say it again. Whether
> a change gets accepted has nothing to do with who suggested it, but
> whether the person suggesting it can convince the community that it's
> worthwhile.  Fujitsu-Siemens implemented cpupools, which is a fairly
> invasive patch, in order to support their own business models; while the
> XenClient team has had a lot of resistance to getting v4v upstreamed,
> even though their product depends on it.  My max_pages change was
> accepted (along with many others), but many others have also been
> rejected.  For example, my "domain runstates" patch was rejected, and is
> still being carried in the XenServer patchqueue several years later.
> 
> If you have been unable to convince the community that your patch is
> necessary, then either:
> 1. It's not necessary / not ready in its current state
> 2. You're not very good at being persuasive
> 3. We're too closed-minded / biased whatever to understand it
> 
> You clearly believe #3 -- you began by accusing us of being
> closed-minded (i.e., "stuck in a static world", &c), but have since
> changed to accusing us of being biased.  You have now made this
> accusation several times, in spite of being presented evidence to the
> contrary each time.  This evidence has included important Citrix patches
> that have been rejected, patches from other organizations that have been
> accepted, and also evidence that most of the people opposing your patch
> (including Jan, IanC, IanJ, Keir, Tim, and Andres) don't know anything
> about DMC and have no direct connection with XenServer.

For the public record, I _partially_ believe #3.  I would restate it
as: You (and others with the same point-of-view) have a very fixed
idea of how memory-management should work in the Xen stack.  This
idea is not really implemented, AFAICT you haven't thought through
the policy issues, and you haven't yet realized the challenges
I believe it will present in the context of Oracle's dynamic model
(since AFAIK you have not understood tmem and selfballooning, though
both are fully open source upstream in Xen and Linux).

I firmly believe that if you fully understood those challenges and
the shipping implementation of Oracle's dynamic model, your position
would be different.  So this has been a long, long education process
for all of us.

"Closed-minded" and "biased" are very subjective terms and have
negative connotations, so I will let others interpret my statements
above and will plead guilty only if the court of public opinion
deems I "clearly believe #3".
 
> For my part, I'm willing to believe #2, which is why I suggested that
> you ask someone else to take up the cause, and why I am glad that Konrad
> has joined the discussion.

I'm glad too. :-)

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

