
Re: [PATCH] x86/hvm/dom0: fix PVH initrd and metadata placement


  • To: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 26 Oct 2023 14:35:53 +0200
  • Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Delivery-date: Thu, 26 Oct 2023 12:36:02 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 26.10.2023 14:09, Xenia Ragiadakou wrote:
> On 26/10/23 14:41, Jan Beulich wrote:
>> On 26.10.2023 12:31, Andrew Cooper wrote:
>>> On 26/10/2023 9:34 am, Xenia Ragiadakou wrote:
>>>> On 26/10/23 10:35, Jan Beulich wrote:
>>>>> On 26.10.2023 08:45, Xenia Ragiadakou wrote:
>>>>>> Given that start < kernel_end and end > kernel_start, the logic that
>>>>>> determines the best placement for the dom0 initrd and metadata does
>>>>>> not take into account the two cases below:
>>>>>> (1) start > kernel_start && end > kernel_end
>>>>>> (2) start < kernel_start && end < kernel_end
>>>>>>
>>>>>> In case (1), the evaluation will result in end = kernel_start,
>>>>>> i.e. end < start, and will load the initrd in the middle of the
>>>>>> kernel. In case (2), the evaluation will result in start = kernel_end,
>>>>>> i.e. end < start, and will load the initrd at kernel_end, which is
>>>>>> outside the memory region under evaluation.
>>>>> I agree there is a problem if the kernel range overlaps but is not fully
>>>>> contained in the E820 range under inspection. I'd like to ask though
>>>>> under what conditions that can happen, as it seems suspicious for the
>>>>> kernel range to span multiple E820 ranges.
>>>> We tried to boot Zephyr as pvh dom0 and its load address was under 1MB.
>>>>
>>>> I know ... that maybe shouldn't have been permitted at all, but
>>>> nevertheless we hit this issue.
>>>
>>> Zephyr is linked to run at 4k.  That's what the ELF Headers say, and the
>>> entrypoint is not position-independent.
>> Very interesting. What size is their kernel? And, Xenia, can you provide
>> the E820 map that you were finding the collision with?
> 
> Sure.
> 
> Xen-e820 RAM map:
> 
>   [0000000000000000, 000000000009fbff] (usable)
>   [000000000009fc00, 000000000009ffff] (reserved)
>   [00000000000f0000, 00000000000fffff] (reserved)
>   [0000000000100000, 000000007ffdefff] (usable)
>   [000000007ffdf000, 000000007fffffff] (reserved)
>   [00000000b0000000, 00000000bfffffff] (reserved)
>   [00000000fed1c000, 00000000fed1ffff] (reserved)
>   [00000000fffc0000, 00000000ffffffff] (reserved)
>   [0000000100000000, 000000027fffffff] (usable)
> 
> (XEN) ELF: phdr: paddr=0x1000 memsz=0x8000
> (XEN) ELF: phdr: paddr=0x100000 memsz=0x28a90
> (XEN) ELF: phdr: paddr=0x128aa0 memsz=0x7560
> (XEN) ELF: memory: 0x1000 -> 0x130000

Oh, so it's not any particular range that crosses an E820 boundary,
but merely the total range, including all holes, which does. That
raises the (only somewhat related) question of what we would do with a
kernel having a really large hole somewhere.
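For illustration, the two partial-overlap cases from the patch description could be handled along these lines. This is a hypothetical sketch, not the actual Xen placement code; the function name shrink_around_kernel and its interface are invented here. It assumes the caller has already established that the region [*start, *end) overlaps the kernel image [kstart, kend):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Hypothetical helper (not the actual Xen code): given a candidate RAM
 * region [*start, *end) known to overlap the kernel image [kstart, kend),
 * shrink the region so it no longer overlaps the kernel, covering the
 * two partial-overlap cases as well as full containment either way.
 * Returns false when no usable space remains. */
static bool shrink_around_kernel(paddr_t *start, paddr_t *end,
                                 paddr_t kstart, paddr_t kend)
{
    if ( *start >= kstart && *end <= kend )
        return false;                /* Region entirely inside the kernel. */

    if ( *start >= kstart )          /* Case (1): kernel covers the low end. */
        *start = kend;
    else if ( *end <= kend )         /* Case (2): kernel covers the high end. */
        *end = kstart;
    else                             /* Kernel fully inside: keep larger gap. */
    {
        if ( kstart - *start >= *end - kend )
            *end = kstart;
        else
            *start = kend;
    }

    return *start < *end;
}
```

With the E820 map and ELF range reported above (kernel 0x1000 -> 0x130000), the usable region starting at 0x100000 falls into case (1) and would be shrunk to start at 0x130000, instead of producing end < start.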

Jan
