
Re: [PATCH] xen/arm: Increase DOM0_FDT_EXTRA_SIZE to support max reserved memory banks


  • To: Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: "Orzel, Michal" <michal.orzel@xxxxxxx>
  • Date: Tue, 31 Mar 2026 10:12:00 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, "Volodymyr Babchuk" <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Tue, 31 Mar 2026 08:12:16 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 27/03/2026 13:23, Oleksandr Tyshchenko wrote:
> 
> 
> On 3/27/26 09:30, Orzel, Michal wrote:
> 
> Hello Michal
> 
>>
>>
>> On 26/03/2026 20:03, Oleksandr Tyshchenko wrote:
>>>
>>>
>>> On 3/26/26 18:50, Orzel, Michal wrote:
>>>
>>> Hello Michal
>>>
>>>>
>>>>
>>>> On 26/03/2026 14:15, Oleksandr Tyshchenko wrote:
>>>>> Xen fails to construct the hardware domain's device tree with
>>>>> FDT_ERR_NOSPACE (-3) when the host memory map is highly fragmented
>>>>> (e.g., numerous reserved memory regions).
>>>>>
>>>>> This occurs because DOM0_FDT_EXTRA_SIZE underestimates the space
>>>>> required for the generated extra /memory node. make_memory_node()
>>>> Where does this extra /memory node come from? If this is for normal
>>>> reserved memory regions, they should be present in the host dtb and
>>>> therefore accounted for by fdt_totalsize (the host dtb should have
>>>> reserved regions described in /memory and /reserved-memory). Are you
>>>> trying to account for static shm regions?
>>>
>>>
>>> I might have misunderstood something, but here is my analysis:
>>>
>>> The extra /memory node is generated by Xen itself in handle_node() ->
>>> make_memory_node() (please refer to the if ( reserved_mem->nr_banks > 0
>>> ) check).
>>>
>>> Even though the normal reserved memory regions are present in the host
>>> DTB (and thus accounted for in fdt_totalsize), Xen generates a new
>>> /memory node specifically for the hardware domain to describe these
>>> regions as reserved but present in the memory map. And since this node
>>> is generated at runtime (it is not a direct copy from the host DTB),
>>> its size must be covered by DOM0_FDT_EXTRA_SIZE.
>> Yes, but the original DTB should also have these reserved regions
>> described in /memory nodes, thus taking up some space that is already
>> accounted for in fdt_totalsize. Are you trying to say that in the host
>> DTB, these reserved ranges fit nicely into e.g. a single /memory node
>> range (i.e. a single reg pair covering most of the RAM)?
> 
> yes
> 
> 
>> I can see that it might be possible but the commit msg needs to be
>> clear about it. As of now, it reads as if the problem always occurred
>> when there are multiple reserved memory regions. That's not true if a
>> host DTB generates one /memory per one /reserved.
> 
> Yes, you are correct that the total size depends on how the host DTB is 
> structured compared to how Xen regenerates it at runtime. So, the issue 
> can arise if the host DTB represents RAM using a single, large reg 
> entry or just a few entries.
> 
> ***
> 
> I will update the commit message to clarify that, something like below:
> 
> Xen fails to construct the hardware domain's device tree with
> FDT_ERR_NOSPACE (-3) when the host memory map is highly fragmented
> (e.g., numerous reserved memory regions) and the host DTB represents
> RAM compactly (e.g., a single reg pair or just a few).
> 
> This occurs because DOM0_FDT_EXTRA_SIZE underestimates the space
> required for the generated extra /memory node. While the host DTB
> might represent RAM compactly, make_memory_node() aggregates
> all reserved regions into a single reg property.
> With NR_MEM_BANKS (256) and 64-bit address/size cells, this property
> can grow up to 4KB (256 * 16), easily exceeding the space originally
> occupied by the host DTB's nodes plus the current padding, thereby
> overflowing the allocated buffer.
This reads better.

> 
> 
>>
>> Another issue is with the static shm nodes. The user specifies the
>> regions in the domain configuration and Xen creates *additional* nodes
>> under /reserved and /memory that afaict we don't account for.
> 
> Yes, you are right.
> 
> Since these SHM sub-nodes and properties are generated purely from the 
> Xen domain configuration and are not present in the host DTB, they have 
> zero space allocated for them in fdt_totalsize.
> 
> So we need to redefine the macro. I propose the following formula that 
> separates the range data (16 bytes per bank in /memory) from the node 
> overhead (160 bytes per SHM region):
What is included in these 160 bytes? Did you manually check all fdt functions
inside make_shm_resv_memory_node?

> 
> #define DOM0_FDT_EXTRA_SIZE (128 + sizeof(struct fdt_reserve_entry) + \
>                              (NR_MEM_BANKS * 16) +                    \
>                              (NR_SHMEM_BANKS * 160))
I think you only accounted for the shm nodes under /reserved-memory. Like
any other reserved memory nodes, they are also added to the /memory reg
property (see DT_MEM_NODE_REG_RANGE_SIZE).

~Michal




 

