Re: [Xen-devel] Xen ballooning interface
On Tue, Aug 21, 2018 at 10:58:18AM +0100, Wei Liu wrote:
> On Mon, Aug 13, 2018 at 03:06:10PM +0200, Juergen Gross wrote:
> > Today's interface of Xen for memory ballooning is quite a mess. There
> > are some shortcomings which should be addressed somehow. After a
> > discussion on IRC there was consensus we should try to design a new
> > interface addressing the current and probably future needs.
> >
> > Current interface
> > -----------------
> > A guest has access to the following memory-related information (all
> > for x86):
> >
> > - the memory map (E820 or EFI)
> > - ACPI tables for HVM/PVH guests
> > - actual maximum size via the XENMEM_maximum_reservation hypercall
> >   (the hypervisor will deny attempts of the guest to allocate more)
> > - current size via the XENMEM_current_reservation hypercall
> > - Xenstore entry "memory/static-max" for the upper bound of memory
> >   size (telling the guest which memory size might be reached without
> >   hotplugging memory)
> > - Xenstore entry "memory/target" for the current target size (used
> >   for ballooning: the Xen tools set the size the guest should try to
> >   reach by allocating or releasing memory)
> >
> > The main problem with this interface is that the guest doesn't know
> > in all cases which memory is included in the values (e.g. memory
> > allocated by the Xen tools for the firmware of a HVM guest is
> > included in the Xenstore and hypercall information, but not in the
> > memory map).
>
> Somewhat related: who has the canonical source of all the information?
> I think Xen should have that, but it is unclear to me how the toolstack
> can get such information from Xen. ISTR it is currently possible to get
> the current number of pages and the maximum number of pages, both of
> which include pages for firmware that are visible to the guest as
> reserved (E820 / EFI).
>
> Without that fixed, the new interface won't be of much use, because the
> information the toolstack puts in the new nodes is still potentially
> wrong. Currently the toolstack applies some constant fudge numbers,
> which is a bit unpleasant.
>
> It would be at least useful to break down the accounting inside the
> hypervisor a bit more:
>
> * max_pages : maximum number of pages a domain can use for whatever
>   purpose (ram + firmware + others)
> * curr_pages : current number of pages a domain is using (ram + ...)
> * max_ram_pages : maximum number of pages a domain can use for ram
> * curr_ram_pages : ...

The problem here is that new hypercalls would have to be added, because
firmware running inside the guest picks RAM regions and changes them to
reserved, for example, and the firmware would need a way to tell Xen
about those changes.

We could even have something like an expanded memory map with more
types, in order to describe MMIO regions trapped inside the hypervisor,
firmware regions, RAM, etc., that could be modified by both the
toolstack and Xen.

Roger.
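
A minimal sketch, in Linux guest kernel context, of how a guest consumes
the current interface described above -- the two XENMEM reservation
queries plus the "memory/target" Xenstore node (value in KiB) that the
balloon driver watches. The function name is made up for illustration:

#include <linux/kernel.h>
#include <xen/xenbus.h>
#include <xen/interface/xen.h>
#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

/* Illustrative only: query the guest's view of its memory reservation. */
static void query_balloon_state(void)
{
	domid_t domid = DOMID_SELF;
	unsigned long long target_kib;
	int curr, max;

	/* Both results are page counts and, for HVM guests, include the
	 * pages the tools allocated for firmware -- the ambiguity
	 * discussed above. */
	curr = HYPERVISOR_memory_op(XENMEM_current_reservation, &domid);
	max  = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
	if (curr < 0 || max < 0)
		return;

	/* The tools write the ballooning target here (in KiB); the guest
	 * is expected to allocate or release pages until it matches. */
	if (xenbus_scanf(XBT_NIL, "memory", "target", "%llu", &target_kib) == 1)
		pr_info("balloon: curr %d max %d pages, target %llu KiB\n",
			curr, max, target_kib);
}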
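
On the toolstack side, the two numbers Wei says are retrievable today
can be read via libxenctrl; a sketch, assuming the xc_domain_getinfo()
interface as of Xen 4.11, where nr_pages and max_memkb likewise include
the firmware pages -- hence the constant fudge numbers:

#include <stdio.h>
#include <stdlib.h>
#include <xenctrl.h>

int main(int argc, char **argv)
{
    uint32_t domid = argc > 1 ? atoi(argv[1]) : 0;
    xc_dominfo_t info;
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);

    if (!xch)
        return 1;

    /* xc_domain_getinfo() may return the next existing domain, so
     * verify we actually got the one we asked about. */
    if (xc_domain_getinfo(xch, domid, 1, &info) != 1 || info.domid != domid) {
        fprintf(stderr, "no such domain %u\n", domid);
        xc_interface_close(xch);
        return 1;
    }

    printf("dom%u: %lu pages in use, max %lu KiB\n",
           info.domid, info.nr_pages, info.max_memkb);
    xc_interface_close(xch);
    return 0;
}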
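
To make the proposed breakdown concrete, one purely hypothetical shape
for a new XENMEM subop returning it -- neither this struct nor the subop
name exists in xen/include/public/memory.h today:

/* Hypothetical, for illustration only. */
struct xen_memory_accounting {
    domid_t  domid;          /* IN: domain to query */
    uint64_t max_pages;      /* OUT: ram + firmware + others */
    uint64_t curr_pages;     /* OUT: currently allocated, all types */
    uint64_t max_ram_pages;  /* OUT: upper bound usable as guest RAM */
    uint64_t curr_ram_pages; /* OUT: currently allocated as guest RAM */
};

/* e.g.: rc = HYPERVISOR_memory_op(XENMEM_get_accounting, &acct); */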
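
Similarly, the expanded memory map could hypothetically look like the
following, with more region types than E820 offers and an owner field
recording who may modify an entry (all names invented here):

/* Hypothetical, for illustration only. */
#define XENMEM_REGION_RAM        0  /* normal guest RAM */
#define XENMEM_REGION_FIRMWARE   1  /* firmware-owned pages */
#define XENMEM_REGION_MMIO_TRAP  2  /* MMIO trapped/emulated by Xen */
#define XENMEM_REGION_RESERVED   3  /* reserved, not usable as RAM */

struct xenmem_region {
    uint64_t start;  /* guest-physical start address */
    uint64_t size;   /* region size in bytes */
    uint32_t type;   /* XENMEM_REGION_* */
    uint32_t owner;  /* who may modify: tools, Xen or firmware */
};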
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel