[PATCH] Xen: Design doc for 1:1 direct-map and static allocation
Create one design doc for 1:1 direct-map and static allocation. It is the
first draft and aims to describe why and how we allocate 1:1 direct-map
(guest physical == physical) domains, and why and how we put domains on
static allocation.

Signed-off-by: Penny Zheng <penny.zheng@xxxxxxx>
---
 docs/designs/static_alloc_and_direct_map.md | 239 ++++++++++++++++++++
 1 file changed, 239 insertions(+)
 create mode 100644 docs/designs/static_alloc_and_direct_map.md

diff --git a/docs/designs/static_alloc_and_direct_map.md b/docs/designs/static_alloc_and_direct_map.md
new file mode 100644
index 0000000000..fdda162188
--- /dev/null
+++ b/docs/designs/static_alloc_and_direct_map.md
@@ -0,0 +1,239 @@
# Preface

This document is an early draft covering the 1:1 direct-map memory map
(`guest physical == physical`) of domUs, and static allocation. Since the
implementation of these two features overlaps considerably, both are
introduced in one design.

Right now, these two features are limited to the Arm architecture.

This design aims to describe why and how a guest would be created as a 1:1
direct-map domain, and why and what static allocation is.

This document is partly based on Stefano Stabellini's patch series v1:
[direct-map DomUs](
https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg00707.html).

This is a first draft and some questions are still unanswered. Where this is
the case, they are listed under the chapter `DISCUSSION`.

# Introduction on Static Allocation

Static allocation refers to a system or sub-system (domains) for which memory
areas are pre-defined by configuration, using physical address ranges.

## Background

Cases where static allocation is needed:

 * Whenever a system has pre-defined, non-changing behaviour. This is usually
the case in the safety world, where the system must behave the same upon
every reboot, so memory resources for both Xen and domains should be static
and pre-defined.
 * Whenever a guest wants to allocate memory from well-defined memory ranges.
For example, a system has one high-speed RAM region and would like to assign
it to one specific domain.

 * Whenever a system needs a guest restricted to some known memory area due
to hardware limitations. For example, some devices can only do DMA to a
specific part of the memory.

Limitations:

 * There is no consideration for PV devices at the moment.

## Design on Static Allocation

Static allocation refers to a system or sub-system (domains) for which memory
areas are pre-defined by configuration, using physical address ranges.

These pre-defined memory regions -- static memory -- are reserved as parts of
RAM from the beginning, and shall never go to the heap allocator or boot
allocator for any use.

### Static Allocation for Domains

### New Device Tree Node: `xen,static-mem`

This introduces a new `xen,static-mem` node to define static memory regions
for one specific domain.

For domains on static allocation, users need to pre-define guest RAM regions
in the configuration, through a `xen,static-mem` property under the
appropriate `domUx` node.

Here is one example:

    domU1 {
        compatible = "xen,domain";
        #address-cells = <0x2>;
        #size-cells = <0x2>;
        cpus = <2>;
        xen,static-mem = <0x0 0xa0000000 0x0 0x20000000>;
        ...
    };

RAM at 0xa0000000, of 512MB size, is reserved as static memory for domU1's
RAM.

### New Page Flag: `PGC_reserved`

In order to differentiate and manage pages reserved as static memory
separately from those allocated from the heap allocator for normal domains,
a new page flag `PGC_reserved` is introduced.

Pages are granted `PGC_reserved` when static memory is initialized.
### New Linked Page List: `reserved_page_list` in `struct domain`

Right now, for normal domains, pages allocated from the heap allocator as
guest RAM are inserted into the linked page list `page_list` when they are
assigned to the domain, for later management.

In order to tell them apart, pages allocated from static memory shall be
inserted into a different linked page list, `reserved_page_list`.

Later, when a domain is destroyed and its memory relinquished, only the pages
in `page_list` go back to the heap; the pages in `reserved_page_list` shall
not.

### Memory Allocation for Domains on Static Allocation

RAM regions pre-defined as static memory for one specific domain shall be
parsed and reserved from the beginning, and shall never go to any memory
allocator for any use.

Later, when allocating static memory for this specific domain, after
acquiring those reserved regions, a set of verifications needs to be done
before assigning. For each page, this at least includes the following steps:

1. Check that the page is in the free state and has a zero reference count.
2. Check that the page is reserved (`PGC_reserved`).

Then these pages are assigned to the specific domain, and all of them are
inserted into the new linked page list `reserved_page_list`.

At last, the guest P2M mapping is set up. By default, the memory shall be
mapped to the fixed guest RAM addresses `GUEST_RAM0_BASE` and
`GUEST_RAM1_BASE`, just like for normal domains. But, as described later in
the 1:1 direct-map design, if `direct-map` is set, the guest physical address
will be equal to the physical address.

### Static Allocation for Xen itself

### New Device Tree Node: `xen,reserved-heap`

Static memory for the Xen heap refers to parts of RAM reserved at boot for
Xen heap use only. The memory is pre-defined through the Xen configuration,
using physical address ranges.

The reserved memory for the Xen heap is an optional feature and can be
enabled by adding a device tree property in the `chosen` node.
Currently, this feature is only supported on AArch64.

Here is one example:

    chosen {
        xen,reserved-heap = <0x0 0x30000000 0x0 0x40000000>;
        ...
    };

RAM at 0x30000000, of 1GB size, will be reserved as heap memory. Later, the
heap allocator will allocate memory only from this specific region.

# Introduction on 1:1 direct-map

## Background

Cases where a domU needs a 1:1 direct-map memory map:

 * The IOMMU is not present in the system.
 * The IOMMU is disabled because it doesn't cover a specific device and all
the guests are trusted. Think of a mixed scenario with a few devices behind
the IOMMU and a few without: guest DMA security still could not be totally
guaranteed, so users may want to disable the IOMMU to at least gain some
performance improvement.
 * The IOMMU is disabled as a workaround when it doesn't have enough
bandwidth. To be specific, in a few extreme situations, when multiple devices
do DMA concurrently, the requests may exceed the IOMMU's transmission
capacity.
 * The IOMMU is disabled because it adds too much latency on DMA. For
example, a TLB miss in some IOMMU hardware may add latency to DMA progress,
so users may want to disable it in some realtime scenarios.

*WARNING:
Users should be aware that it is not always secure to assign a device without
IOMMU/SMMU protection.
When the device is not protected by the IOMMU/SMMU, the administrator should
make sure that:
 1. The device is assigned to a trusted guest.
 2. Additional security mechanisms are in place on the platform.

Limitations:
 * There is no consideration for PV devices at the moment.

## Design on 1:1 direct-map

Only 1:1 direct-map with user-defined memory regions is supported here.

The implementation may cover the following aspects:

### Native Addresses and IRQ Numbers for GIC and UART (vPL011)

Today, fixed addresses and IRQ numbers are used to map the GIC and UART
(vPL011) in domUs, and they may clash with the physical ones in 1:1
direct-map domains.
So, using native addresses and IRQ numbers for the GIC and UART (vPL011) in
1:1 direct-map domains is necessary.

For the virtual interrupt of vPL011: instead of always using
`GUEST_VPL011_SPI`, try to reuse the physical SPI number if possible.

### New Device Tree Option: `direct-map`

A new option, `direct-map`, is introduced for 1:1 direct-map domains.

When users allocate a 1:1 direct-map domain, the `direct-map` property needs
to be added under the appropriate `/chosen/domUx` node. For now, since only
1:1 direct-map with user-defined memory regions is supported, users must
choose the RAM banks used as 1:1 direct-map guest RAM through
`xen,static-mem`, which has been elaborated before in the chapter
`New Device Tree Node: xen,static-mem`.

Here is one example allocating a 1:1 direct-map domain:

    chosen {
        ...
        domU1 {
            compatible = "xen,domain";
            #address-cells = <0x2>;
            #size-cells = <0x2>;
            cpus = <2>;
            vpl011;
            direct-map;
            xen,static-mem = <0x0 0x30000000 0x0 0x40000000>;
            ...
        };
        ...
    };

domU1 is a 1:1 direct-map domain with reserved RAM at 0x30000000, of 1GB
size.

### Memory Allocation for 1:1 direct-map Domains

Implementing memory allocation for a 1:1 direct-map domain includes two
parts: static allocation for the domain, and the 1:1 direct-map itself.

The first part has been elaborated before in the chapter `Memory Allocation
for Domains on Static Allocation`. Then, to ensure the 1:1 direct-map, when
setting up the guest P2M mapping, it needs to be made sure that the guest
physical address equals the physical address (`gfn == mfn`).

*DISCUSSION:

 * Only booting a domain on static allocation or on 1:1 direct-map through
the device tree is supported here; is `xl` support also needed?

 * Only 1:1 direct-map domains with user-defined memory regions are supported
here; are 1:1 direct-map domains with arbitrary memory regions also needed?
We had quite a discussion [here](
https://patchew.org/Xen/20201208052113.1641514-1-penny.zheng@xxxxxxx/). In
order to mitigate guest memory fragmentation, we introduce a static memory
pool (same implementation as `xen,reserved-heap`) and a static memory
allocator (a new linear memory allocator, much like the boot allocator). This
new allocator also applies to MPU systems, so I may create a new design
document to elaborate on this.
-- 
2.17.1