Re: [Xen-devel] freemem-slack and large memory environments
On Tue, 24 Feb 2015, Ian Campbell wrote:
> On Tue, 2015-02-24 at 16:06 +0000, Stefano Stabellini wrote:
> > > Now that we autodetect the use of dom0_mem and set autoballooning
> > > correctly, perhaps we should just revert a39b5bc64?
> >
> > We could do that, and theoretically it makes perfect sense, but it would
> > result in an even bigger waste of memory.
>
> Would it, even though we now detect dom0_mem usage and do the right
> thing? I thought a39b5bc64 was a workaround for autoballooning=1
> in /etc/xen/xl.conf when dom0 was used.
>
> > I think we should either introduce a hard upper limit for
> > freemem-slack, as Mike suggested, or remove freemem-slack altogether and
> > properly fix any issues caused by lack of memory in the system (properly
> > account for memory usage).
> > After all, we are just at the beginning of the release cycle; it is the
> > right time to do this.
>
> I'm all in favour of someone doing this, similarly to
> http://bugs.xenproject.org/xen/bug/23
>
> Who is going to do that (either one)?

I am OK with sending a patch for both.

> > > Ian.
> > >
> > > > > It seems that there are two approaches to resolve this:
> > > > >
> > > > >  - Introduce a hard limit on freemem-slack to avoid unnecessarily
> > > > >    large reservations
> > > > >  - Increase the retry count during domain creation to ensure enough
> > > > >    time is set aside for any cycles spent freeing memory for
> > > > >    freemem-slack (on the test machine, doubling the retry count to 6
> > > > >    is the minimum required)
> > > > >
> > > > > Which is the best approach (or did I miss something)?
> > > >
> > > > Sorry - forgot to CC relevant maintainers.
> > > >
> > > > -Mike
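
To make the first of Mike's two options concrete, here is a minimal sketch of
how a hard cap on the freemem-slack reservation could look. The 15% fraction,
the 1 GiB ceiling, and the function and macro names are illustrative
assumptions made for this sketch, not the values or identifiers libxl
actually uses:

    /* Illustrative sketch only -- not the libxl implementation.
     * The 15% fraction, the 1 GiB ceiling and all names below are
     * assumptions chosen for the example. */
    #include <stdint.h>

    #define SLACK_PERCENT  15                       /* assumed share of host RAM */
    #define SLACK_MAX_KB   ((uint64_t)1024 * 1024)  /* assumed hard cap: 1 GiB */

    static uint64_t capped_freemem_slack_kb(uint64_t host_mem_kb)
    {
        uint64_t slack_kb = host_mem_kb * SLACK_PERCENT / 100;

        /* On large-memory hosts a purely percentage-based reservation
         * grows into many gigabytes; clamping it keeps the reserved
         * amount bounded regardless of host size. */
        if (slack_kb > SLACK_MAX_KB)
            slack_kb = SLACK_MAX_KB;

        return slack_kb;
    }

The second option (raising the retry count during domain creation) does not
shrink the reservation itself; it only allows more iterations for ballooning
to actually free that much memory, which is why the count had to be doubled
to 6 on the large test machine.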