Re: [Xen-devel] [PATCH 1 of 4] Prevent low values of max_pages for domains doing sharing or paging
At 22:57 -0500 on 15 Feb (1329346624), Andres Lagar-Cavilla wrote:
>  xen/common/domctl.c |  8 +++++++-
>  1 files changed, 7 insertions(+), 1 deletions(-)
>
>
> Apparently, setting d->max_pages to something lower than d->tot_pages is
> used as a mechanism for controlling a domain's footprint. It will result
> in all new page allocations failing.

Yep.

> This is a really bad idea with paging or sharing, as regular guest memory
> accesses may need to be satisfied by allocating new memory (either to
> page in or to unshare).

Nack.  If a domain ends up with a max_pages so low that it can't page
in, that's a tools bug.  This patch doesn't fix it, because the
toolstack could set new max == current tot (er, +1) and then you have
the same problem if you page in twice.  (And also it silently ignores
the update rather than reporting an error.)

Tim.

> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>
> diff -r 62b1fe67b8d1 -r 11fd4e0a1e1a xen/common/domctl.c
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -813,8 +813,14 @@ long do_domctl(XEN_GUEST_HANDLE(xen_domc
>           * NB. We removed a check that new_max >= current tot_pages; this means
>           * that the domain will now be allowed to "ratchet" down to new_max. In
>           * the meantime, while tot > max, all new allocations are disallowed.
> +         *
> +         * Except that this is not a great idea for domains doing sharing or
> +         * paging, as they need to perform new allocations on the fly.
>           */
> -        d->max_pages = new_max;
> +        if ( (new_max > d->max_pages) ||
> +             !((d->mem_event->paging.ring_page != NULL) ||
> +               d->arch.hvm_domain.mem_sharing_enabled) )
> +            d->max_pages = new_max;
>          ret = 0;
>          spin_unlock(&d->page_alloc_lock);
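[Editor's note: for illustration only, below is a small self-contained C model of the behaviour under discussion. It is not Xen code; the names (toy_domain, set_max_pages, page_in_one) and numbers are invented for the example. It shows why clamping max_pages to tot_pages + 1 only defers the failure (a second page-in still runs out of headroom), and sketches the alternative Tim alludes to: rejecting an over-tight clamp with an error instead of silently ignoring it.]

/* Toy model of the max_pages / tot_pages interaction discussed above.
 * NOT Xen source; all names here are invented for illustration. */
#include <stdio.h>
#include <errno.h>
#include <stdbool.h>

struct toy_domain {
    unsigned long tot_pages;   /* pages currently allocated to the domain */
    unsigned long max_pages;   /* allocation ceiling set by the toolstack */
    bool paging_enabled;       /* domain relies on demand page-in */
};

/* Allocation succeeds only while tot_pages < max_pages, mirroring the
 * "while tot > max, all new allocations are disallowed" comment in the
 * quoted diff. */
static int page_in_one(struct toy_domain *d)
{
    if (d->tot_pages >= d->max_pages)
        return -ENOMEM;
    d->tot_pages++;
    return 0;
}

/* One possible shape for the behaviour Tim hints at: refuse (with an
 * error) a max_pages that leaves no headroom for page-in, rather than
 * silently ignoring the update. */
static int set_max_pages(struct toy_domain *d, unsigned long new_max)
{
    if (d->paging_enabled && new_max <= d->tot_pages)
        return -EINVAL;
    d->max_pages = new_max;
    return 0;
}

int main(void)
{
    struct toy_domain d = { .tot_pages = 100, .max_pages = 200,
                            .paging_enabled = true };

    /* Toolstack clamps max to "current tot + 1", as in Tim's example. */
    printf("clamp max to tot+1: %d\n", set_max_pages(&d, d.tot_pages + 1));

    printf("first page-in:  %d\n", page_in_one(&d));   /* 0: succeeds */
    printf("second page-in: %d\n", page_in_one(&d));   /* -ENOMEM: fails */

    /* A clamp at or below tot_pages is rejected outright in this model. */
    printf("set max below tot: %d\n", set_max_pages(&d, d.tot_pages - 1));
    return 0;
}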