
Re: [PATCH] x86/hvm/dom0: fix PVH initrd and metadata placement


  • To: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 26 Oct 2023 16:58:31 +0200
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 26 Oct 2023 14:59:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Oct 26, 2023 at 03:09:04PM +0300, Xenia Ragiadakou wrote:
> On 26/10/23 14:41, Jan Beulich wrote:
> > On 26.10.2023 12:31, Andrew Cooper wrote:
> > > On 26/10/2023 9:34 am, Xenia Ragiadakou wrote:
> > > > On 26/10/23 10:35, Jan Beulich wrote:
> > > > > On 26.10.2023 08:45, Xenia Ragiadakou wrote:
> > > > > > Given that start < kernel_end and end > kernel_start, the logic that
> > > > > > determines the best placement for dom0 initrd and metadata does not
> > > > > > take into account the two cases below:
> > > > > > (1) start > kernel_start && end > kernel_end
> > > > > > (2) start < kernel_start && end < kernel_end
> > > > > > 
> > > > > > In case (1), the evaluation will result in end = kernel_start,
> > > > > > i.e. end < start, and will load the initrd in the middle of the kernel.
> > > > > > In case (2), the evaluation will result in start = kernel_end,
> > > > > > i.e. end < start, and will load the initrd at kernel_end, which is
> > > > > > outside the memory region under evaluation.
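
As a minimal standalone illustration of one way these two cases could be
handled (this is not the actual find_memory() code nor the patch; the helper
name and the example addresses are hypothetical): clamp the candidate RAM
region against the kernel range by keeping the larger of the holes below and
above the kernel, which covers both partial-overlap cases.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t paddr_t;

/*
 * Return the size of the usable hole left in [start, end) once the
 * kernel range [kernel_start, kernel_end) is carved out, storing the
 * hole bounds in *s/*e.  Hypothetical helper, for illustration only.
 */
static paddr_t usable_hole(paddr_t start, paddr_t end,
                           paddr_t kernel_start, paddr_t kernel_end,
                           paddr_t *s, paddr_t *e)
{
    /* No overlap: the whole region is usable. */
    if ( end <= kernel_start || start >= kernel_end )
    {
        *s = start;
        *e = end;
        return end - start;
    }

    /* Hole below the kernel (empty when start >= kernel_start). */
    paddr_t lo = kernel_start > start ? kernel_start - start : 0;
    /* Hole above the kernel (empty when end <= kernel_end). */
    paddr_t hi = end > kernel_end ? end - kernel_end : 0;

    if ( lo >= hi )
    {
        *s = start;
        *e = start + lo;
        return lo;
    }

    *s = kernel_end;
    *e = end;
    return hi;
}

int main(void)
{
    paddr_t s, e;

    /* Case (1): the region starts inside the kernel and ends above it. */
    usable_hole(0x100000, 0x7ffdf000, 0x1000, 0x130000, &s, &e);
    printf("hole: [%#lx, %#lx)\n", (unsigned long)s, (unsigned long)e);

    /* Case (2): the region starts below the kernel and ends inside it. */
    usable_hole(0, 0x9fc00, 0x1000, 0x130000, &s, &e);
    printf("hole: [%#lx, %#lx)\n", (unsigned long)s, (unsigned long)e);

    return 0;
}
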
> > > > > I agree there is a problem if the kernel range overlaps but is not fully
> > > > > contained in the E820 range under inspection. I'd like to ask though
> > > > > under what conditions that can happen, as it seems suspicious for the
> > > > > kernel range to span multiple E820 ranges.
> > > > We tried to boot Zephyr as pvh dom0 and its load address was under 1MB.
> > > > 
> > > > I know ... that maybe shouldn't have been permitted at all, but
> > > > nevertheless we hit this issue.
> > > 
> > > Zephyr is linked to run at 4k.  That's what the ELF Headers say, and the
> > > entrypoint is not position-independent.
> > Very interesting. What size is their kernel? And, Xenia, can you provide
> > the E820 map that you were finding the collision with?
> 
> Sure.
> 
> Xen-e820 RAM map:
> 
>  [0000000000000000, 000000000009fbff] (usable)
>  [000000000009fc00, 000000000009ffff] (reserved)
>  [00000000000f0000, 00000000000fffff] (reserved)
>  [0000000000100000, 000000007ffdefff] (usable)
>  [000000007ffdf000, 000000007fffffff] (reserved)
>  [00000000b0000000, 00000000bfffffff] (reserved)
>  [00000000fed1c000, 00000000fed1ffff] (reserved)
>  [00000000fffc0000, 00000000ffffffff] (reserved)
>  [0000000100000000, 000000027fffffff] (usable)
> 
> (XEN) ELF: phdr: paddr=0x1000 memsz=0x8000
> (XEN) ELF: phdr: paddr=0x100000 memsz=0x28a90
> (XEN) ELF: phdr: paddr=0x128aa0 memsz=0x7560
> (XEN) ELF: memory: 0x1000 -> 0x130000

Interesting, so far we have assumed that the program headers contain
physical addresses covering a mostly contiguous region, and that it
would all fit into a single RAM region, which is not the case for the
Zephyr layout above.

If we have to support ELFs with such scattered load regions, we
should start using a rangeset or similar in find_memory() in order to
have a clear picture of the memory ranges still available for loading
the kernel metadata.
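
As a rough standalone sketch of that rangeset idea (it does not use Xen's
actual rangeset interface; the structure names, the array limit and the use
of main() are purely illustrative), the RAM regions from the E820 map could
be collected and the kernel's PT_LOAD ranges punched out, leaving exactly
the holes that remain available for the initrd and metadata:

#include <stdint.h>
#include <stdio.h>

#define MAX_RANGES 32

struct range { uint64_t s, e; };             /* [s, e) */

struct ranges {
    struct range r[MAX_RANGES];
    unsigned int nr;
};

/* Remove [s, e) from the set, splitting a range when it is punctured. */
static void ranges_remove(struct ranges *set, uint64_t s, uint64_t e)
{
    for ( unsigned int i = 0; i < set->nr; i++ )
    {
        struct range *cur = &set->r[i];

        if ( e <= cur->s || s >= cur->e )
            continue;                        /* No overlap. */

        if ( s > cur->s && e < cur->e && set->nr < MAX_RANGES )
        {
            /* Hole in the middle: split into two ranges. */
            set->r[set->nr++] = (struct range){ e, cur->e };
            cur->e = s;
        }
        else if ( s <= cur->s && e >= cur->e )
            cur->s = cur->e = 0;             /* Fully covered: empty it. */
        else if ( s <= cur->s )
            cur->s = e;                      /* Trim the front. */
        else
            cur->e = s;                      /* Trim the tail. */
    }
}

int main(void)
{
    /* Low usable RAM regions from the E820 map quoted above. */
    struct ranges ram = {
        .r = { { 0x0, 0x9fc00 }, { 0x100000, 0x7ffdf000 } },
        .nr = 2,
    };

    /* Zephyr's PT_LOAD ranges from the phdrs quoted above. */
    ranges_remove(&ram, 0x1000, 0x1000 + 0x8000);
    ranges_remove(&ram, 0x100000, 0x100000 + 0x28a90);
    ranges_remove(&ram, 0x128aa0, 0x128aa0 + 0x7560);

    /* Whatever is left over is available for the initrd/metadata. */
    for ( unsigned int i = 0; i < ram.nr; i++ )
        if ( ram.r[i].e > ram.r[i].s )
            printf("free: [%#llx, %#llx)\n",
                   (unsigned long long)ram.r[i].s,
                   (unsigned long long)ram.r[i].e);

    return 0;
}
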

Thanks, Roger.



 

