Re: [Xen-devel] Xen ballooning interface
>>> On 13.08.18 at 15:06, <jgross@xxxxxxxx> wrote:
> Suggested new interface
> -----------------------
> Hypercalls, memory map(s) and ACPI tables should stay the same (for
> compatibility reasons or because they are architectural interfaces).
>
> As the main confusion in the current interface is related to the
> specification of the target memory size, this part of the interface
> should be changed: specifying the size of the ballooned area instead
> is much clearer and will be the same for all guest types (no firmware
> memory or magic additions involved).

But isn't this backwards? The balloon size is a piece of information
internal to the guest. Why should the outside world know or care? What
if the guest internals don't even allow the balloon to be the size
requested?

> Open questions
> --------------
> Should we add memory size information to the memory/vnode<n> nodes?
>
> Should the guest add information about its current balloon sizes to the
> memory/vnode<n> nodes (i.e. after ballooning, or every x seconds while
> ballooning)?
>
> Should we specify whether the guest is free to balloon another vnode
> than specified?

Ballooning out _some_ memory is always going to be better than
ballooning out none at all. I think the node can only serve as a hint
here.

> Should memory hotplug (at least for PV domains) use the vnode specific
> Xenstore paths, too, if supported by the guest?
>
> Any further thoughts on this?

The other problem we've always had was that address information could
not be conveyed to the driver. The worst example in the past was that
32-bit PV domains can't run on arbitrarily high underlying physical
addresses, but of course there are other cases where memory below a
certain boundary may be needed. The obvious problem with directly
exposing address information through the interface is that for HVM
guests machine addresses are meaningless.
Hence I wonder whether a dedicated "balloon out this page if you can"
mechanism would be something to consider.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel