
Re: [Xen-devel] Proposed new "memory capacity claim" hypercall/feature



On 30/10/2012 16:13, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx> wrote:

>> Okay, so why is tmem incompatible with implementing claims in the toolstack?
> 
> (Hmmm... maybe I could schedule the equivalent of a PhD qual exam
> for tmem with all the core Xen developers as examiners?)
> 
> The short answer is tmem moves memory capacity around far too
> frequently to be managed by a userland toolstack, especially if
> the "controller" lives on a central "manager machine" in a
> data center (Oracle's model).  The ebb and flow of memory supply
> and demand for each guest is instead managed entirely dynamically.

I don't know. I agree that fine-grained memory management is the duty of the
hypervisor, but it seems to me that the toolstack should be able to handle
admission control. It knows the maximum amount of memory each existing guest
is allowed to consume, how much memory the new guest requires, and how much
memory the system has in total... Isn't the decision then simple? Tmem
should be fairly invisible to the toolstack, right?

 -- Keir

> The somewhat longer answer (and remember all of this is
> implemented and upstream in Xen and Linux today):
> 
> First, in the tmem model, each guest is responsible for driving
> its memory utilization (what the Xen tools call "current" and the
> Xen hypervisor calls "tot_pages") as low as it can.  This is done
> in Linux with selfballooning.  At 50Hz (default), the guest
> kernel will attempt to expand or contract the balloon to match
> the guest kernel's current demand for memory.  Agreed, one guest
> requesting changes at 50Hz could probably be handled by
> a userland toolstack, but what about 100 guests?  Maybe...
> but there's more.
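[The selfballooning feedback loop described above can be sketched roughly as
follows. This is a hypothetical illustration only; the function name, step
fraction, and floor parameter are invented and are not the actual Linux
selfballoon driver's logic:]

```c
/* Hypothetical sketch of one selfballooning tick: the guest nudges its
 * balloon target toward its current memory demand (e.g. committed
 * memory), moving only a fraction of the gap per tick to avoid
 * thrashing.  All names and constants here are illustrative. */
static unsigned long selfballoon_step(unsigned long cur_pages,
                                      unsigned long demand_pages,
                                      unsigned long min_pages)
{
    unsigned long target = demand_pages < min_pages ? min_pages
                                                    : demand_pages;

    /* Move at most 1/8 of the gap per tick (rounded up). */
    if (target < cur_pages)
        cur_pages -= (cur_pages - target + 7) / 8;
    else if (target > cur_pages)
        cur_pages += (target - cur_pages + 7) / 8;

    return cur_pages;
}
```

[Run at 50Hz per guest, each such adjustment can shrink or grow the guest's
tot_pages, which is why routing every step through a toolstack would not
scale.]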
> 
> Second, in the tmem model, each guest is making tmem hypercalls
> at a rate of perhaps thousands per second, driven by the kernel
> memory management internals.  Each call deals with a single
> page of memory, and each may remove a page from (or return
> a page to) Xen's free list.  Interacting with a userland
> toolstack for every page is simply not feasible at such a
> high frequency, even in a single guest.
> 
> Third, tmem in Xen implements both compression and deduplication
> so each attempt to put a page of data from the guest into
> the hypervisor may or may not require a new physical page.
> Only the hypervisor knows.
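[The point that only the hypervisor can know whether a "put" consumed a new
physical page can be illustrated with a toy deduplicating store. This is a
hedged sketch; none of these names or structures come from the real tmem
code:]

```c
#include <stdbool.h>

/* Toy deduplicating page store: a put consumes a new physical page
 * only when its contents are unseen, so the caller (the guest, or a
 * toolstack) cannot predict whether free memory shrank.  Page contents
 * are stood in for by a single unsigned long.  Illustrative only. */
#define POOL_SLOTS 16

static unsigned long stored[POOL_SLOTS];
static int nstored;

/* Returns true if the put consumed a new physical page. */
static bool tmem_put_sketch(unsigned long content)
{
    for (int i = 0; i < nstored; i++)
        if (stored[i] == content)
            return false;      /* deduplicated: shared, no new page */

    stored[nstored++] = content;
    return true;               /* new physical page consumed */
}
```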
> 
> So, even on a single machine, tmem is tossing memory capacity
> about at a very high frequency.  A userland toolstack can't
> possibly keep track, let alone hope to control it; that would
> entirely defeat the value of tmem.  It would be like requiring
> the toolstack to participate in every vcpu->pcpu transition
> in the Xen cpu scheduler.
> 
> Does that make sense and answer your question?
> 
> Anyway, I think the proposed "claim" hypercall/subop neatly
> solves the problem of races between large-chunk memory demands
> (i.e. large domain launches) and small-chunk memory demands
> (i.e. small domain launches and single-page tmem allocations).
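[The claim idea, reserving capacity atomically up front so a large domain
build cannot be starved by concurrent single-page allocations, can be
sketched like this. This is an illustrative model only, not Xen's actual
implementation, and in the hypervisor the check-and-reserve would run under
a lock:]

```c
#include <stdbool.h>

/* Illustrative model of a capacity claim: a large-chunk consumer
 * reserves pages up front without allocating them, and small-chunk
 * allocations may only draw from the unclaimed remainder. */
static unsigned long free_pages;     /* pages on the free list */
static unsigned long claimed_pages;  /* promised but not yet allocated */

/* Atomically reserve nr pages of capacity; no pages move yet. */
static bool claim_pages(unsigned long nr)
{
    if (free_pages - claimed_pages < nr)
        return false;          /* not enough unclaimed capacity */
    claimed_pages += nr;
    return true;
}

/* A small allocation (e.g. a tmem put) must leave claims untouched. */
static bool alloc_page_unclaimed(void)
{
    if (free_pages <= claimed_pages)
        return false;          /* remainder is spoken for */
    free_pages--;
    return true;
}
```

[With this shape, a domain build that has claimed its memory can proceed at
leisure while per-page allocations race against only the unclaimed pool.]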





 

