
Re: [Xen-devel] Proposed new "memory capacity claim" hypercall/feature



Hi, 

At 16:21 -0700 on 29 Oct (1351527686), Dan Magenheimer wrote:
> > > The hypervisor must also enforce some semantics:  If an allocation
> > > occurs such that a domain's tot_phys_pages would equal or exceed
> > > d.tot_claimed_pages, then d.tot_claimed_pages becomes "unset".
> > > This enforces the temporary nature of a claim:  Once a domain
> > > fully "occupies" its claim, the claim silently expires.
> > 
> > Why does that happen?  If I understand you correctly, releasing the
> > claim is something the toolstack should do once it knows it's no longer
> > needed.
> 
> I haven't thought this all the way through yet, but I think this
> part of the design allows the toolstack to avoid monitoring the
> domain until "total_phys_pages" reaches "total_claimed" pages,
> which should make the implementation of claims in the toolstack
> simpler, especially in many-server environments.

I think the toolstack has to monitor the domain for that long anyway,
since it will have to unpause it once it's built.  Relying on an
implicit release seems fragile -- if the builder ends up using only
(total_claimed - 1) pages, or temporarily allocating total_claimed and
then releasing some memory, things could break.
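For concreteness, the expiry rule being debated could be modelled as below. This is only a sketch: the field names tot_phys_pages and tot_claimed_pages come from the quoted proposal, while the struct layout and the account_alloc() helper are hypothetical, not real Xen internals. It shows why "temporarily allocating total_claimed and then releasing" loses the claim for good: expiry is one-way.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-domain accounting fields, named after the
 * tot_phys_pages / tot_claimed_pages fields in the proposal.
 * tot_claimed_pages == 0 means "no claim outstanding". */
struct domain {
    uint64_t tot_phys_pages;
    uint64_t tot_claimed_pages;
};

/* Sketch of the proposed enforcement: once a domain's allocations
 * reach or exceed its claim, the claim is silently unset.  Note the
 * expiry is one-way -- freeing pages afterwards does not restore it. */
static void account_alloc(struct domain *d, uint64_t nr_pages)
{
    d->tot_phys_pages += nr_pages;
    if (d->tot_claimed_pages &&
        d->tot_phys_pages >= d->tot_claimed_pages)
        d->tot_claimed_pages = 0;   /* claim expires */
}
```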

> > I think it needs a plan for handling restricted memory allocations.
> > For example, some PV guests need their memory to come below a
> > certain machine address, or entirely in superpages, and certain
> > build-time allocations come from xenheap.  How would you handle that
> > sort of thing?
> 
> Good point.  I think there's always been some uncertainty about
> how to account for different zones and xenheap... are they part of the
> domain's memory or not?

Xenheap pages are not part of the domain memory for accounting purposes;
likewise other 'anonymous' allocations (that is, anywhere that
alloc_domheap_pages() & friends are called with a NULL domain pointer).
Pages with restricted addresses are just accounted like any other
memory, except when they're on the free lists.

Today, toolstacks use a rule of thumb for how much extra space to leave
to cover those things -- if you want to pre-allocate them, you'll have
to go through the hypervisor making sure _all_ memory allocations are
accounted to the right domain somehow (maybe by generalizing the
shadow-allocation pool to cover all per-domain overheads).  That seems
like a useful side-effect of adding your new feature.
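The accounting rule described above (anonymous allocations charged to no domain) can be sketched as follows. The names here are illustrative stand-ins, not the real alloc_domheap_pages() internals; the point is only that a NULL domain pointer means the pages never count against any domain's total, which is exactly the gap the rule-of-thumb headroom has to cover.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for Xen's per-domain page accounting. */
struct domain {
    unsigned long tot_pages;
};

/* Sketch of the accounting behaviour: allocations made with a NULL
 * domain pointer ("anonymous", e.g. xenheap-style allocations) are
 * not charged to any domain. */
static void account_pages(struct domain *d, unsigned long nr)
{
    if (d)                  /* NULL => anonymous: nobody is charged */
        d->tot_pages += nr;
}
```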

> Deserves some more thought...  if you can enumerate all such cases,
> that would be very helpful (and probably valuable long-term
> documentation as well).

I'm afraid I can't, not without re-reading all the domain-builder code
and a fair chunk of the hypervisor, so it's up to you to figure it out.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

