
Re: E820 memory allocation issue on Threadripper platforms


  • To: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, "Juergen Gross" <jgross@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jason Andryuk <jason.andryuk@xxxxxxx>
  • Date: Tue, 12 Mar 2024 17:07:12 -0400
  • Cc: Patrick Plenefisch <simonpatp@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jason Andryuk <jandryuk@xxxxxxxxx>
  • Delivery-date: Tue, 12 Mar 2024 21:07:24 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 2024-03-10 10:06, Marek Marczykowski-Górecki wrote:
> On Fri, Jan 19, 2024 at 02:40:06PM +0100, Marek Marczykowski-Górecki wrote:
>> On Thu, Jan 18, 2024 at 01:23:56AM -0500, Patrick Plenefisch wrote:
>>> On Wed, Jan 17, 2024 at 3:46 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>> On 17.01.2024 07:12, Patrick Plenefisch wrote:
>>>>> As someone who hasn't built a kernel in over a decade, should I
>>>>> figure out how to do a kernel build with
>>>>> CONFIG_PHYSICAL_START=0x2000000 and report back?
>>>>
>>>> That was largely a suggestion to perhaps allow you to gain some
>>>> workable setup. It would be of interest to us largely for
>>>> completeness.
>>>
>>> Typo aside, setting the boot to 2MiB works! It works better for PV.

>> Are there any downsides to running a kernel with
>> CONFIG_PHYSICAL_START=0x200000? I can confirm it fixes the issue on
>> another affected system, and if there aren't any practical downsides,
>> I'm tempted to make it the default in the Qubes OS kernel.

> I have the answer here: CONFIG_PHYSICAL_START=0x200000 breaks booting
> Xen in KVM with OVMF. There, the memory map has:
> (XEN)  0000000100000-00000007fffff type=7 attr=000000000000000f
> (XEN)  0000000800000-0000000807fff type=10 attr=000000000000000f
> (XEN)  0000000808000-000000080afff type=7 attr=000000000000000f
> (XEN)  000000080b000-000000080bfff type=10 attr=000000000000000f
> (XEN)  000000080c000-000000080ffff type=7 attr=000000000000000f
> (XEN)  0000000810000-00000008fffff type=10 attr=000000000000000f
> (XEN)  0000000900000-00000015fffff type=4 attr=000000000000000f

> So, starting at 0x1000000 worked since the type=4 (boot services
> data) region there is already available at that point, but with
> 0x200000 the kernel conflicts with those ACPI NVS areas around
> 0x800000.

> I'm cc-ing Jason since I see he claimed the relevant GitLab issue.
> This conflict at least gives an easy test environment, with the
> console logged to a file.
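
For illustration, the collision is easy to check mechanically. Below is a
minimal, self-contained sketch (plain C, not Xen code; the map is transcribed
from the log quoted above, and KERNEL_SIZE is an invented placeholder for the
real image footprint) that tests a candidate CONFIG_PHYSICAL_START against
that map, flagging only ACPI NVS overlaps since boot services data is
reclaimable by the time the kernel is placed:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* EFI memory types appearing in the log above. */
#define CONV_MEM  7   /* EfiConventionalMemory */
#define BS_DATA   4   /* EfiBootServicesData: reclaimable after EBS */
#define ACPI_NVS 10   /* EfiACPIMemoryNVS: must stay untouched */

struct range { uint64_t start, end; unsigned int type; };

/* Transcribed from the OVMF map quoted above (ends are inclusive in
 * the log; stored exclusive here). */
static const struct range map[] = {
    { 0x100000, 0x800000,  CONV_MEM },
    { 0x800000, 0x808000,  ACPI_NVS },
    { 0x808000, 0x80b000,  CONV_MEM },
    { 0x80b000, 0x80c000,  ACPI_NVS },
    { 0x80c000, 0x810000,  CONV_MEM },
    { 0x810000, 0x900000,  ACPI_NVS },
    { 0x900000, 0x1600000, BS_DATA  },
};

/* Invented placeholder: anything reaching past 0x800000 shows the
 * conflict. */
#define KERNEL_SIZE 0x1000000ULL

static void check(uint64_t phys_start)
{
    uint64_t end = phys_start + KERNEL_SIZE;

    for (unsigned int i = 0; i < sizeof(map) / sizeof(map[0]); i++)
        if (phys_start < map[i].end && map[i].start < end &&
            map[i].type == ACPI_NVS)
            printf("0x%" PRIx64 " overlaps ACPI NVS 0x%" PRIx64 "-0x%" PRIx64 "\n",
                   phys_start, map[i].start, map[i].end);
}

int main(void)
{
    check(0x200000);   /* proposed value: collides with the NVS islands */
    check(0x1000000);  /* current default: only BS_DATA above it */
    return 0;
}

Run as-is, it reports three NVS overlaps for 0x200000 and nothing for
0x1000000, matching the behaviour described above.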

Thanks. I actually hacked Xen to reserve memory ranges in the e820 to repro.
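
A sketch of what such a hack might look like (illustrative only, not the
actual patch: the struct and E820_* constants mirror Xen's asm/e820.h, but
reserve_range() is a simplified helper invented here, and in Xen proper the
change would sit in the boot-time e820 handling):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors struct e820entry and the E820_* types from Xen's
 * asm/e820.h, redeclared so the sketch stands alone. */
struct e820entry { uint64_t addr; uint64_t size; uint32_t type; };
#define E820_RAM      1
#define E820_RESERVED 2
#define E820MAX       128

static struct e820entry e820[E820MAX];
static unsigned int nr_entries;

/* Invented helper: punch a reserved hole into a RAM entry, splitting
 * it in place. Assumes [start, end) lies wholly inside one RAM entry
 * and that the map has room for two more entries. */
static void reserve_range(uint64_t start, uint64_t end)
{
    for (unsigned int i = 0; i < nr_entries; i++) {
        struct e820entry *e = &e820[i];
        uint64_t e_end = e->addr + e->size;

        if (e->type != E820_RAM || start < e->addr || end > e_end)
            continue;

        e->size = start - e->addr;               /* RAM below the hole */
        e820[nr_entries++] =
            (struct e820entry){ start, end - start, E820_RESERVED };
        if (end < e_end)                         /* RAM above the hole */
            e820[nr_entries++] =
                (struct e820entry){ end, e_end - end, E820_RAM };
        return;
    }
}

int main(void)
{
    e820[0] = (struct e820entry){ 0x100000, 0x40000000 - 0x100000, E820_RAM };
    nr_entries = 1;

    /* Mimic the troublesome OVMF layout: a reserved chunk around
     * 0x800000 so a kernel placed at 0x200000 collides. */
    reserve_range(0x800000, 0x900000);

    for (unsigned int i = 0; i < nr_entries; i++)
        printf("%013" PRIx64 "-%013" PRIx64 " type=%" PRIu32 "\n",
               e820[i].addr, e820[i].addr + e820[i].size - 1, e820[i].type);
    return 0;
}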

I claimed the *PVH* Dom0 GitLab issue. PV is outside of my scope :(

Regards,
Jason
