
Re: [Xen-devel] domain creation vs querying free memory (xend and xl)



At 12:33 -0700 on 02 Oct (1349181195), Dan Magenheimer wrote:
> > From: Tim Deegan [mailto:tim@xxxxxxx]
> > Subject: Re: [Xen-devel] domain creation vs querying free memory (xend and 
> > xl)
> > 
> > At 13:03 -0700 on 01 Oct (1349096617), Dan Magenheimer wrote:
> > > Bearing in mind that I know almost nothing about xl or
> > > the tools layer, and that, as a result, I tend to look
> > > for hypervisor solutions, I'm thinking it's not possible to
> > > solve this without direct participation of the hypervisor anyway,
> > > at least while ensuring the solution will successfully
> > > work with any memory technology that involves ballooning
> > > with the possibility of overcommit (i.e. tmem, page sharing
> > > and host-swapping, manual ballooning, PoD)...  EVEN if the
> > > toolset is single threaded (i.e. only one domain may
> > > be created at a time, such as xapi). [1]
> > 
> > TTBOMK, Xapi actually _has_ solved this problem, even with ballooning
> > and PoD.  I don't know if they have any plans to support sharing,
> > swapping or tmem, though.
> 
> Is this because PoD never independently increases the size of a domain's
> allocation? 

AIUI xapi uses the domains' maximum allocations, centrally controlled,
to place an upper bound on the amount of guest memory that can be in
use.  Within those limits there can be ballooning activity.  But TBH I
don't know the details.
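
Roughly, I imagine the bookkeeping looks something like this (a sketch
only, in C; host_pages, dom_max_pages and can_admit_vm are made-up
names, not xapi code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical admission check: a new VM is admitted only if the
     * sum of every existing domain's maximum allocation, plus the
     * newcomer's, fits in host memory.  Within that bound, ballooning
     * can move pages between domains without breaking the invariant. */
    bool can_admit_vm(uint64_t host_pages,
                      const uint64_t *dom_max_pages, size_t ndoms,
                      uint64_t new_max_pages)
    {
        uint64_t committed = new_max_pages;
        for (size_t i = 0; i < ndoms; i++)
            committed += dom_max_pages[i];
        return committed <= host_pages;
    }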

> > Adding a 'reservation' of free pages that may only be allocated by a
> > given domain should be straightforward enough, but I'm not sure it helps
> 
> It absolutely does help.  With tmem (and I think with paging), the
> total allocation of a domain may be increased without knowledge by
> the toolset.

But not past the domain's maximum allowance, right?  That's not the case
with paging, anyway.
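
The invariant I mean is roughly the over-allocation check Xen's page
allocator makes before assigning pages to a domain (a simplified
sketch, modelled loosely on common/page_alloc.c, not the verbatim
source):

    /* Simplified shape of the cap: no allocation path (ballooning up,
     * paging in, etc.) may push a domain's total past the maximum the
     * toolstack configured for it. */
    struct domain_sketch {
        unsigned long tot_pages;   /* pages currently allocated */
        unsigned long max_pages;   /* cap set by the toolstack  */
    };

    int assign_pages_sketch(struct domain_sketch *d, unsigned long nr)
    {
        if (d->tot_pages + nr > d->max_pages)
            return -1;             /* over-allocation: refuse */
        d->tot_pages += nr;
        return 0;
    }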

> > much.  In the 'balloon-to-fit' model where all memory is already
> > allocated to some domain (or tmem), some part of the toolstack needs to
> > sort out freeing up the memory before allocating it to another VM.
> 
> By balloon-to-fit, do you mean that all RAM is occupied?  Tmem
> handles the "sort out freeing up the memory" entirely in the
> hypervisor, so the toolstack never knows.

Does tmem replace ballooning/sharing/swapping entirely?  I thought they
could coexist.  Or, if you just mean that tmem owns all otherwise-free
memory and will relinquish it on demand, then the same problems occur
while the toolstack is moving memory from owned-by-guests to
owned-by-tmem.
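
To spell out the window I mean (hedged pseudocode in C; every function
here is a made-up stand-in for a toolstack operation, stubbed so the
sketch is self-contained):

    #include <stdint.h>

    /* Hypothetical toolstack primitives, stubbed for illustration. */
    uint64_t host_free_pages(void)       { return 0; }
    void balloon_guests_down(uint64_t n) { (void)n; }
    int create_domain(uint64_t n)        { (void)n; return -1; }

    /* Between observing enough free memory and allocating it to the
     * new VM, nothing reserves the freed pages: another creation, a
     * balloon-up or tmem can claim them first, so create_domain()
     * can still fail or stall. */
    int start_vm_racy(uint64_t need_pages)
    {
        balloon_guests_down(need_pages);
        while (host_free_pages() < need_pages)
            ;                          /* wait for memory */
        /* <-- unprotected window here */
        return create_domain(need_pages);
    }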

> > Surely that component needs to handle the exclusion too - otherwise a
> > series of small VM creations could stall a large one indefinitely.
> 
> Not sure I understand this, but it seems feasible.

If you ask for a large VM and a small VM to be started at about the same
time, the small VM will always win (since you'll free enough memory for
the small VM before you free enough for the big one).  If you then ask
for another small VM it will win again, and so forth, indefinitely
postponing the large VM in the waiting-for-memory state, unless some
agent explicitly enforces that VMs be started in order.  If you have
such an agent you probably don't need a hypervisor interlock as well.
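
Such an agent could be as simple as a strict FIFO, where freed memory
may only ever be claimed by the request at the head of the queue (a
sketch under assumed types, not a proposal for any particular
component):

    #include <stdint.h>
    #include <stddef.h>

    struct vm_request {
        uint64_t need_pages;
        struct vm_request *next;
    };

    struct admission_queue {
        struct vm_request *head, *tail;
    };

    /* Called whenever free memory grows; returns the request to start,
     * or NULL if the head still cannot be satisfied.  Nothing behind
     * the head is ever considered, which is what prevents a stream of
     * small requests from starving a large one. */
    struct vm_request *try_admit(struct admission_queue *q,
                                 uint64_t free_pages)
    {
        struct vm_request *r = q->head;
        if (r == NULL || free_pages < r->need_pages)
            return NULL;
        q->head = r->next;
        if (q->head == NULL)
            q->tail = NULL;
        return r;
    }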

I think it would be better to back up a bit.  Maybe you could sketch out
how you think [lib]xl ought to be handling ballooning/swapping/sharing/tmem
when it's starting VMs.  I don't have a strong objection to accounting
free memory to particular domains if it turns out to be useful, but as
always I prefer not to have things happen in the hypervisor if they
could happen in less privileged code.

Tim.
