Re: [PATCH] xen/arm: Increase DOM0_FDT_EXTRA_SIZE to support max reserved memory banks
On 27/03/2026 13:23, Oleksandr Tyshchenko wrote:
> On 3/27/26 09:30, Orzel, Michal wrote:
>
> Hello Michal
>
>> On 26/03/2026 20:03, Oleksandr Tyshchenko wrote:
>>> On 3/26/26 18:50, Orzel, Michal wrote:
>>>
>>> Hello Michal
>>>
>>>> On 26/03/2026 14:15, Oleksandr Tyshchenko wrote:
>>>>> Xen fails to construct the hardware domain's device tree with
>>>>> FDT_ERR_NOSPACE (-3) when the host memory map is highly fragmented
>>>>> (e.g., numerous reserved memory regions).
>>>>>
>>>>> This occurs because DOM0_FDT_EXTRA_SIZE underestimates the space
>>>>> required for the generated extra /memory node. make_memory_node()
>>>> Where does this extra /memory node come from? If this is for normal
>>>> reserved memory regions, they should be present in the host dtb and
>>>> therefore accounted for by fdt_totalsize (the host dtb should have
>>>> reserved regions described in /memory and /reserved-memory). Are you
>>>> trying to account for static shm regions?
>>>
>>> I might have misunderstood something, but here is my analysis:
>>>
>>> The extra /memory node is generated by Xen itself in handle_node() ->
>>> make_memory_node() (please refer to the if ( reserved_mem->nr_banks > 0 )
>>> check).
>>>
>>> Even though the normal reserved memory regions are present in the host
>>> DTB (and thus accounted for in fdt_totalsize), Xen generates a new
>>> /memory node specifically for the hardware domain to describe these
>>> regions as reserved but present in the memory map. And since this node
>>> is generated at runtime (it is not a direct copy from the host DTB),
>>> its size must be covered by DOM0_FDT_EXTRA_SIZE.
>> Yes, but the original DTB should also have these reserved regions
>> described in /memory nodes, thus taking up some space that is already
>> accounted for in fdt_totalsize. Are you trying to say that in the host
>> DTB these reserved ranges fit nicely into e.g. a single /memory node
>> range (i.e. a single reg pair covering most of the RAM)?
>
> Yes.
>
>> I can see that it might be possible, but the commit msg needs to be
>> clear about it. As of now, it reads as if the problem occurred always
>> when there are multiple reserved memory regions. That's not true if a
>> host DTB generates one /memory per one /reserved.
>
> Yes, you are correct that the total size depends on how the host DTB is
> structured compared to how Xen regenerates it at runtime. So, the issue
> can arise if the host DTB represents RAM using a single, large reg entry
> or a few entries.
>
> ***
>
> I will update the commit message to clarify that, something like below:
>
> Xen fails to construct the hardware domain's device tree with
> FDT_ERR_NOSPACE (-3) when the host memory map is highly fragmented
> (e.g., numerous reserved memory regions) and the host DTB represents
> RAM compactly (e.g., a single reg pair or just a few).
>
> This occurs because DOM0_FDT_EXTRA_SIZE underestimates the space
> required for the generated extra /memory node. While the host DTB
> might represent RAM compactly, make_memory_node() aggregates
> all reserved regions into a single reg property.
> With NR_MEM_BANKS (256) and 64-bit address/size cells, this property
> can grow up to 4KB (256 * 16), easily exceeding the space originally
> occupied by the host DTB's nodes plus the current padding, thereby
> overflowing the allocated buffer.

This reads better.

>> Another issue is with the static shm nodes. The user specifies the
>> regions in the domain configuration and Xen creates *additional* nodes
>> under /reserved and /memory that afaict we don't account for.
>
> Yes, you are right.
>
> Since these SHM sub-nodes and properties are generated purely from the
> Xen domain configuration and are not present in the host DTB, they have
> zero space allocated for them in fdt_totalsize.
>
> So we need to redefine the macro. I propose the following formula that
> separates the range data (16 bytes per bank in /memory) from the node
> overhead (160 bytes per SHM region):
>
> #define DOM0_FDT_EXTRA_SIZE (128 + sizeof(struct fdt_reserve_entry) + \
>                              (NR_MEM_BANKS * 16) + \
>                              (NR_SHMEM_BANKS * 160))

What is included in these 160 bytes? Did you manually check all fdt
functions inside make_shm_resv_memory_node?

I think you only accounted for shm nodes under /reserved-memory. As any
other reserved memory node, they are also added to the /memory reg
property (see DT_MEM_NODE_REG_RANGE_SIZE).

~Michal