
[PATCH for-4.17 1/2] docs: Document the minimal requirement of static heap



The static heap feature requires the user to know the minimal heap
size needed for the system to work. Since the heap controlled by Xen
is intended to provide memory for the whole system, the static heap
region must cover not only boot-time memory allocations but also
runtime allocations.

The main source of runtime allocation is the memory for the P2M.
Since XSA-409, the P2M memory is bounded by the P2M pool, so treat
this as the minimal requirement of the static heap. The amount of
memory allocated after all the guests have been created should be
quite limited and mostly predictable.

This commit adds documentation that explains how a user can size the
static heap region.

Signed-off-by: Henry Wang <Henry.Wang@xxxxxxx>
---
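As an illustration of the sizing rule documented below (1 MiB per vCPU, plus 4 KiB per MiB of guest RAM for the P2M map, plus 512 KiB for extended regions), a minimal sketch follows. The function names are hypothetical and not part of Xen or libxl; this only mirrors the arithmetic:

```python
KIB = 1024
MIB = 1024 * KIB

def p2m_pool_size(vcpus, ram_mib):
    """Minimal per-domain P2M pool size in bytes:
    1 MiB per vCPU, 4 KiB per MiB of RAM, plus 512 KiB."""
    return vcpus * MIB + ram_mib * 4 * KIB + 512 * KIB

def min_static_heap(domains):
    """Lower bound on the static heap from the per-domain P2M pools.
    `domains` is a list of (vcpus, ram_mib) tuples."""
    return sum(p2m_pool_size(v, r) for v, r in domains)

# Example: two guests with 4 vCPUs and 1 GiB of RAM each.
# Each needs 4 MiB + 4 MiB + 0.5 MiB = 8.5 MiB, so 17 MiB in total.
print(min_static_heap([(4, 1024), (4, 1024)]) // MIB)
```

Note this is only the P2M contribution; the static heap must also accommodate Xen's other boot-time and runtime allocations.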
 docs/misc/arm/device-tree/booting.txt | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 87eaa3e254..046f28ce31 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -531,6 +531,13 @@ Below is an example on how to specify the static heap in device tree:
 RAM starting from the host physical address 0x30000000 of 1GB size will
 be reserved as static heap.
 
+Users should be mindful that the static heap must at least cover the
+allocation of the P2M pools for all guests. Currently, the minimal
+size of the per-domain P2M pages pool is kept in sync with the
+functions libxl__get_required_paging_memory() (for xl-created domUs)
+and domain_p2m_pages() (for dom0less domUs), that is, 1MiB per vCPU,
+plus 4KiB per MiB of RAM for the P2M map, plus 512KiB to cover
+extended regions.
+
 Static Shared Memory
 ====================
 
-- 
2.17.1
