
Re: [PATCH] xen/arm: Increase DOM0_FDT_EXTRA_SIZE to support max reserved memory banks


  • To: "Orzel, Michal" <michal.orzel@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>
  • Date: Thu, 26 Mar 2026 19:03:54 +0000
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Thu, 26 Mar 2026 19:04:10 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 3/26/26 18:50, Orzel, Michal wrote:

Hello Michal

> 
> 
> On 26/03/2026 14:15, Oleksandr Tyshchenko wrote:
>> Xen fails to construct the hardware domain's device tree with
>> FDT_ERR_NOSPACE (-3) when the host memory map is highly fragmented
>> (e.g., numerous reserved memory regions).
>>
>> This occurs because DOM0_FDT_EXTRA_SIZE underestimates the space
>> required for the generated extra /memory node. make_memory_node()
> Where does this extra /memory node come from? If this is for normal reserved
> memory regions, they should be present in the host dtb and therefore accounted
> by fdt_totalsize (the host dtb should have reserved regions described in 
> /memory
> and /reserved-memory. Are you trying to account for static shm regions?


I might have misunderstood something, but here is my analysis:

The extra /memory node is generated by Xen itself in handle_node() ->
make_memory_node() (please refer to the if ( reserved_mem->nr_banks > 0 )
check).

Even though the normal reserved memory regions are present in the host
DTB (and thus accounted for in fdt_totalsize), Xen generates a new
/memory node specifically for the hardware domain to describe these
regions as reserved but present in the memory map. Since this node is
generated at runtime (it is not a direct copy from the host DTB), its
size must be covered by DOM0_FDT_EXTRA_SIZE.

For instance, with 10 reserved regions:

(XEN) RAM: 0000000040000000 - 000000007fffffff
(XEN)
(XEN) MODULE[0]: 0000000043200000 - 000000004330afff Xen
(XEN) MODULE[1]: 0000000043400000 - 0000000043402fff Device Tree
(XEN) MODULE[2]: 0000000042e00000 - 000000004316907f Ramdisk
(XEN) MODULE[3]: 0000000040400000 - 0000000042d2ffff Kernel
(XEN)  RESVD[0]: 0000000040009000 - 0000000040009fff
(XEN)  RESVD[1]: 0000000040008000 - 0000000040008fff
(XEN)  RESVD[2]: 0000000040007000 - 0000000040007fff
(XEN)  RESVD[3]: 0000000040006000 - 0000000040006fff
(XEN)  RESVD[4]: 0000000040005000 - 0000000040005fff
(XEN)  RESVD[5]: 0000000040004000 - 0000000040004fff
(XEN)  RESVD[6]: 0000000040003000 - 0000000040003fff
(XEN)  RESVD[7]: 0000000040002000 - 0000000040002fff
(XEN)  RESVD[8]: 0000000040001000 - 0000000040001fff
(XEN)  RESVD[9]: 0000000040000000 - 0000000040000fff
...

From make_memory_node():

(XEN) Create memory node
(XEN)   Bank 0: 0x50000000->0x70000000
(XEN) (reg size 4, nr cells 4)
(XEN) Create memory node
(XEN)   Bank 0: 0x40009000->0x4000a000
(XEN)   Bank 1: 0x40008000->0x40009000
(XEN)   Bank 2: 0x40007000->0x40008000
(XEN)   Bank 3: 0x40006000->0x40007000
(XEN)   Bank 4: 0x40005000->0x40006000
(XEN)   Bank 5: 0x40004000->0x40005000
(XEN)   Bank 6: 0x40003000->0x40004000
(XEN)   Bank 7: 0x40002000->0x40003000
(XEN)   Bank 8: 0x40001000->0x40002000
(XEN)   Bank 9: 0x40000000->0x40001000
(XEN) (reg size 4, nr cells 40)

> 
>> aggregates all reserved regions into a single reg property. With
>> NR_MEM_BANKS (256) and 64-bit address/size cells, this property
>> can grow up to 4KB (256 * 16), easily overflowing the allocated
>> buffer.
>>
>> Fix this by increasing DOM0_FDT_EXTRA_SIZE to account for
>> the worst-case size: NR_MEM_BANKS * 16 bytes.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>> ---
>> Just to be clear, I have not seen a real-world issue with this.
>> The issue was observed during testing of limit conditions.
>> With this patch applied, Xen successfully boots the hardware domain,
>> exposing 256 reserved memory regions to it (using a synthetically
>> generated configuration).
>> ---
>> ---
>>   xen/arch/arm/domain_build.c | 6 ++++--
>>   1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index e8795745dd..7f9f0f5510 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -100,9 +100,11 @@ int __init parse_arch_dom0_param(const char *s, const 
>> char *e)
>>   /*
>>    * Amount of extra space required to dom0's device tree.  No new nodes
> This comment would want to be updated because since its introduction things 
> have
> changed. Even the 128 came up as a result of adding /hypervisor node.

You are right. I suggest the following wording:

Amount of extra space required for dom0's device tree.
This covers nodes generated by Xen, which are not directly copied
from the host DTB. It is calculated as:
  - Space for the /hypervisor node (128 bytes).
  - One terminating reserve map entry (16 bytes).
  - Space for a generated memory node covering all possible reserved
    memory regions (NR_MEM_BANKS * 16 bytes).


> 
>>    * are added (yet) but one terminating reserve map entry (16 bytes) is
>> - * added.
>> + * added. Plus space for an extra memory node to cover all possible reserved
>> + * memory regions (2 addr cells + 2 size cells).
>>    */
>> -#define DOM0_FDT_EXTRA_SIZE (128 + sizeof(struct fdt_reserve_entry))
>> +#define DOM0_FDT_EXTRA_SIZE (128 + sizeof(struct fdt_reserve_entry) + \
>> +    (NR_MEM_BANKS * 16))
>>   
>>   unsigned int __init dom0_max_vcpus(void)
>>   {
> 
> ~Michal
> 
