
Re: [PATCH] xen/arm: Increase DOM0_FDT_EXTRA_SIZE to support max reserved memory banks


  • To: "Orzel, Michal" <michal.orzel@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>
  • Date: Tue, 31 Mar 2026 14:10:47 +0000
  • Accept-language: en-US, ru-RU
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Tue, 31 Mar 2026 14:10:59 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH] xen/arm: Increase DOM0_FDT_EXTRA_SIZE to support max reserved memory banks


On 3/31/26 11:12, Orzel, Michal wrote:

Hello Michal


> 
> 
> On 27/03/2026 13:23, Oleksandr Tyshchenko wrote:
>>
>>
>> On 3/27/26 09:30, Orzel, Michal wrote:
>>
>> Hello Michal
>>
>>>
>>>
>>> On 26/03/2026 20:03, Oleksandr Tyshchenko wrote:
>>>>
>>>>
>>>> On 3/26/26 18:50, Orzel, Michal wrote:
>>>>
>>>> Hello Michal
>>>>
>>>>>
>>>>>
>>>>> On 26/03/2026 14:15, Oleksandr Tyshchenko wrote:
>>>>>> Xen fails to construct the hardware domain's device tree with
>>>>>> FDT_ERR_NOSPACE (-3) when the host memory map is highly fragmented
>>>>>> (e.g., numerous reserved memory regions).
>>>>>>
>>>>>> This occurs because DOM0_FDT_EXTRA_SIZE underestimates the space
>>>>>> required for the generated extra /memory node. make_memory_node()
>>>>> Where does this extra /memory node come from? If this is for normal
>>>>> reserved memory regions, they should be present in the host dtb and
>>>>> therefore accounted for by fdt_totalsize (the host dtb should have
>>>>> reserved regions described in /memory and /reserved-memory). Are you
>>>>> trying to account for static shm regions?
>>>>
>>>>
>>>> I might have misunderstood something, but here is my analysis:
>>>>
>>>> The extra /memory node is generated by Xen itself in handle_node() ->
>>>> make_memory_node() (please refer to the if ( reserved_mem->nr_banks > 0
>>>> ) check).
>>>>
>>>> Even though the normal reserved memory regions are present in the host
>>>> DTB (and thus accounted for in fdt_totalsize), Xen generates a new
>>>> /memory node specifically for the hardware domain to describe these
>>>> regions as reserved but present in the memory map. And since this node
>>>> is generated at runtime (it is not a direct copy from the host DTB),
>>>> its size must be covered by DOM0_FDT_EXTRA_SIZE.
>>> Yes, but the original DTB should also have these reserved regions
>>> described in /memory nodes, thus taking up some space that is already
>>> accounted for in fdt_totalsize. Are you trying to say that in the host
>>> DTB, these reserved ranges fit nicely into e.g. a single /memory node
>>> range (i.e. a single reg pair covering most of the RAM)?
>>
>> yes
>>
>>
>>> I can see that it might be possible but the commit msg needs to be
>>> clear about it. As of now, it reads as if the problem always occurred
>>> when there are multiple reserved memory regions. That's not true if a
>>> host DTB generates one /memory per one /reserved.
>>
>> Yes, you are correct that the total size depends on how the host DTB is
>> structured compared to how Xen regenerates it at runtime. So, the issue
>> can arise if the host DTB represents RAM using a single, large reg entry
>> or just a few entries.
>>
>> ***
>>
>> I will update the commit message to clarify that, something like below:
>>
>> Xen fails to construct the hardware domain's device tree with
>> FDT_ERR_NOSPACE (-3) when the host memory map is highly fragmented
>> (e.g., numerous reserved memory regions) and the host DTB represents
>> RAM compactly (e.g., a single reg pair or just a few).
>>
>> This occurs because DOM0_FDT_EXTRA_SIZE underestimates the space
>> required for the generated extra /memory node. While the host DTB
>> might represent RAM compactly, make_memory_node() aggregates
>> all reserved regions into a single reg property.
>> With NR_MEM_BANKS (256) and 64-bit address/size cells, this property
>> can grow up to 4KB (256 * 16), easily exceeding the space originally
>> occupied by the host DTB's nodes plus the current padding, thereby
>> overflowing the allocated buffer.
> This reads better.

ok


> 
>>
>>
>>>
>>> Another issue is with the static shm nodes. User specifies the regions
>>> in the domain configuration and Xen creates *additional* nodes under
>>> /reserved and /memory that afaict we don't account for.
>>
>> Yes, you are right.
>>
>> Since these SHM sub-nodes and properties are generated purely from the
>> Xen domain configuration and are not present in the host DTB, they have
>> zero space allocated for them in fdt_totalsize.
>>
>> So we need to redefine the macro. I propose the following formula that
>> separates the range data (16 bytes per bank in /memory) from the node
>> overhead (160 bytes per SHM region):
> What is included in these 160 bytes? Did you manually check all fdt functions
> inside make_shm_resv_memory_node?

According to my calculations (which, of course, might not be precise):

- FDT_BEGIN_NODE token (4b) + "xen-shmem@ffffffffffffffff\0" name (27b 
padded to 28): 32 bytes
- compatible (12b header + 21b string padded to 24): 36 bytes
- reg (12b header + 16b payload [4 cells]): 28 bytes
- xen,id (12b header + 16b max string [15 chars + \0]): 28 bytes
- xen,offset (12b header + 8b payload): 20 bytes
- FDT_END_NODE token: 4 bytes

Total exact node payload: 148 bytes. I also added a 12-byte margin, so 
the total rounds up to 160 (the nearest 16-byte boundary).

> 
>>
>> #define DOM0_FDT_EXTRA_SIZE (128 + sizeof(struct fdt_reserve_entry) + \
>>                               (NR_MEM_BANKS * 16) +                    \
>>                               (NR_SHMEM_BANKS * 160))
> I think you only accounted for shm nodes under /reserved-memory. As any other
> reserved memory node, they are also added to /memory reg property (see
> DT_MEM_NODE_REG_RANGE_SIZE).

You are right, and I completely missed this in my original calculation. 
I mistakenly believed (NR_MEM_BANKS * 16) would cover the entire 
capacity of the /memory node's reg.

shm_mem_node_fill_reg_range() appends the shared memory banks directly 
to the main /memory node's reg property, so each SHM bank adds another 
16 bytes (4 cells) there as well.

So, I will refine the macro to explicitly reflect both the 160-byte 
discrete sub-node and the extra 16 bytes added to the /memory node:

#define DOM0_FDT_EXTRA_SIZE (128 + sizeof(struct fdt_reserve_entry) + \
                              (NR_MEM_BANKS * 16) +                    \
                              (NR_SHMEM_BANKS * (160 + 16)))

Or wait, we can actually drop the SHM overhead entirely when 
CONFIG_STATIC_SHM=n:

#define DOM0_FDT_EXTRA_SIZE (128 + sizeof(struct fdt_reserve_entry) + \
                              (NR_MEM_BANKS * 16) +                    \
                              (IS_ENABLED(CONFIG_STATIC_SHM) ?         \
                              (NR_SHMEM_BANKS * (160 + 16)) : 0))


> 
> ~Michal
> 

 

