
Re: [Xen-devel] freemem-slack and large memory environments



On Wed, 25 Feb 2015, Ian Campbell wrote:
> On Wed, 2015-02-25 at 12:00 +0000, Ian Campbell wrote:
> > On Wed, 2015-02-25 at 11:39 +0000, Stefano Stabellini wrote:
> > > On Tue, 24 Feb 2015, Ian Campbell wrote:
> > > > On Tue, 2015-02-24 at 16:06 +0000, Stefano Stabellini wrote:
> > > > > > Now that we autodetect the use of dom0_mem and set autoballooning
> > > > > > correctly, perhaps we should just revert a39b5bc64?
> > > > > 
> > > > > We could do that and theoretically it makes perfect sense, but it
> > > > > would result in an even bigger waste of memory.
> > > > 
> > > > Would it, even though we now detect dom0_mem usage and do the right
> > > > thing? I thought a39b5bc64 was a workaround for autoballooning=1
> > > > in /etc/xen/xl.conf when dom0_mem was used.
> > > > 
> > > > 
> > > > > I think we should either introduce a hard upper limit for
> > > > > freemem-slack as Mike suggested, or remove freemem-slack altogether
> > > > > and properly fix any issues caused by lack of memory in the system
> > > > > (properly account for memory usage).
> > > > > After all, we are just at the beginning of the release cycle, so it
> > > > > is the right time to do this.
> > > > 
> > > > I'm all in favour of someone doing this, similarly to
> > > > http://bugs.xenproject.org/xen/bug/23
> > > > 
> > > > Who is going to do that (either one)?
> > > 
> > > I am OK with sending patches for both
> > 
> > Super, thanks.
> 
> Is the upshot that Mike doesn't need to do anything further with his
> patch (i.e. can drop it)? I think so?

Yes, I think so. Maybe he could help out by testing the patches I am
going to write :-)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel