Re: [Xen-devel] Xen ballooning interface
On 13/08/18 15:54, Jan Beulich wrote:
>>>> On 13.08.18 at 15:06, <jgross@xxxxxxxx> wrote:
>> Suggested new interface
>> -----------------------
>> Hypercalls, memory map(s) and ACPI tables should stay the same (for
>> compatibility reasons or because they are architectural interfaces).
>>
>> As the main confusion in the current interface is related to the
>> specification of the target memory size, this part of the interface
>> should be changed: specifying the size of the ballooned area instead
>> is much clearer and will be the same for all guest types (no firmware
>> memory or magic additions involved).
>
> But isn't this backwards? The balloon size is a piece of information
> internal to the guest. Why should the outside world know or care?

Instead of specifying an absolute value to reach, you'd specify how much
memory the guest should stay below its maximum. I think this is a valid
approach.

> What if the guest internals don't even allow the balloon to be the
> size requested?

Same as today: what if the guest internals don't even allow reaching the
requested target size?

>> Open questions
>> --------------
>> Should we add memory size information to the memory/vnode<n> nodes?
>>
>> Should the guest add information about its current balloon sizes to the
>> memory/vnode<n> nodes (i.e. after ballooning, or every x seconds while
>> ballooning)?
>>
>> Should we specify whether the guest is free to balloon another vnode
>> than specified?
>
> Ballooning out _some_ memory is always going to be better than
> ballooning out none at all. I think the node can only serve as a hint
> here.

I agree. I just wanted to point out that we need to define the possible
reactions to such a situation.

>> Should memory hotplug (at least for PV domains) use the vnode-specific
>> Xenstore paths, too, if supported by the guest?
>>
>>
>> Any further thoughts on this?
>
> The other problem we've always had was that address information
> could not be conveyed to the driver. The worst example in the past
> was that 32-bit PV domains can't run on arbitrarily high underlying
> physical addresses, but of course there are other cases where
> memory below a certain boundary may be needed. The obvious
> problem with directly exposing address information through the
> interface is that for HVM guests machine addresses are meaningless.
> Hence I wonder whether a dedicated "balloon out this page if you
> can" mechanism would be something to consider.

Isn't this a problem orthogonal to the one we are discussing here?

I'd rather do a localhost guest migration to free specific pages a guest
owns, and tell the Xen memory allocator not to hand them out to the new
guest created by the migration.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
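[Editor's illustration] The semantic change Juergen proposes can be sketched in a few lines. This is a minimal Python sketch, not actual Xen toolstack or balloon-driver code; the function and variable names (`target_from_absolute`, `target_from_balloon`, `maxmem_kb`, `balloon_kb`) are invented for illustration and do not correspond to real Xenstore node names.

```python
def target_from_absolute(target_kb: int) -> int:
    # Current interface: the toolstack hands the guest an absolute
    # target size to reach. What that size includes (firmware memory,
    # "magic additions") differs per guest type, which is the source
    # of confusion described in the thread.
    return target_kb

def target_from_balloon(maxmem_kb: int, balloon_kb: int) -> int:
    # Proposed interface: the toolstack specifies only how much memory
    # the guest should stay below its maximum (the balloon size).
    # The guest derives its own target, the same way for all guest
    # types, with no firmware or magic adjustments involved.
    return maxmem_kb - balloon_kb

# A guest with a 4 GiB maximum asked to balloon out 1 GiB should
# settle at 3 GiB, regardless of guest type:
print(target_from_balloon(4 * 1024 * 1024, 1 * 1024 * 1024))  # 3145728 KiB
```

The point of the second form is that the ambiguity moves out of the interface: the toolstack never has to know what the guest counts as part of its "memory size", only how much of the maximum it must leave unused.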