Re: [PATCH] x86/hvm/dom0: fix PVH initrd and metadata placement
On 30.10.2023 08:37, Xenia Ragiadakou wrote:
> Jan, would it be possible to sketch a patch of your suggested solution
> because I'm afraid I have not fully understood it yet and I won't be
> able to implement it properly for a v2?
While what Roger sent looks to be sufficient, I still thought I'd send
my variant, which I think yields more consistent results overall. Like
Roger's, this isn't really tested (beyond making sure it builds).
Jan
From: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>
Subject: x86/hvm/dom0: fix PVH initrd and metadata placement
Given that start < kernel_end and end > kernel_start, the logic that
determines the best placement for the dom0 initrd and metadata does not
take into account the two cases below:
(1) start > kernel_start && end > kernel_end
(2) start < kernel_start && end < kernel_end
In case (1), the evaluation will result in end = kernel_start,
i.e. end < start, and will load the initrd in the middle of the kernel.
In case (2), the evaluation will result in start = kernel_end,
i.e. end < start, and will load the initrd at kernel_end, which is
outside the memory region under evaluation.
This patch reorganizes the conditionals to also cover the so far
unconsidered cases, uniformly returning the lowest available address.
Fixes: 73b47eea2104 ('x86/dom0: improve PVH initrd and metadata placement')
Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
Contrary to my original intentions, since the function prefers lower
addresses (by walking the E820 table forwards), the new cases also
return the lowest possible addresses.
---
v2: Cover further cases of overlap.
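
To illustrate the two mishandled cases from the description, here is a
minimal standalone sketch (not part of the patch; the old_placement()
helper and all addresses are made up for illustration) of the old
overlap handling and its wrapped unsigned arithmetic:

#include <inttypes.h>
#include <stdio.h>

typedef uint64_t paddr_t;
#define INVALID_PADDR (~(paddr_t)0)

/* Old logic, reduced to the placement decision for one E820 region. */
static paddr_t old_placement(paddr_t start, paddr_t end,
                             paddr_t kernel_start, paddr_t kernel_end,
                             paddr_t size)
{
    if ( end <= kernel_start || start >= kernel_end )
        ; /* No overlap, nothing to do. */
    /* Deal with the kernel already being loaded in the region. */
    else if ( kernel_start - start > end - kernel_end )
        end = kernel_start;
    else
        start = kernel_end;

    /* Wraps around when the adjustment above made end < start. */
    if ( end - start >= size )
        return start;

    return INVALID_PADDR;
}

int main(void)
{
    paddr_t size = 0x200000;  /* 2M of initrd + metadata */

    /*
     * Case (1): start > kernel_start && end > kernel_end.
     * kernel_start - start wraps, end is set to kernel_start, and the
     * final size check wraps as well: an address inside the kernel
     * image is returned.
     */
    printf("case (1): %#" PRIx64 "\n",
           old_placement(0x1400000, 0x3000000, 0x1000000, 0x2000000, size));

    /*
     * Case (2): start < kernel_start && end < kernel_end.
     * end - kernel_end wraps, start is set to kernel_end, and kernel_end
     * (outside the region under evaluation) is returned.
     */
    printf("case (2): %#" PRIx64 "\n",
           old_placement(0x800000, 0x1800000, 0x1000000, 0x2000000, size));

    return 0;
}

With these example numbers it prints 0x1400000 for case (1), an address
in the middle of the kernel image, and 0x2000000 (== kernel_end) for
case (2), an address outside the region under evaluation.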
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -515,16 +515,23 @@ static paddr_t __init find_memory(
 
         ASSERT(IS_ALIGNED(start, PAGE_SIZE) && IS_ALIGNED(end, PAGE_SIZE));
 
+        /*
+         * NB: Even better would be to use rangesets to determine a suitable
+         * range, in particular in case a kernel requests multiple heavily
+         * discontiguous regions (which right now we fold all into one big
+         * region).
+         */
         if ( end <= kernel_start || start >= kernel_end )
-            ; /* No overlap, nothing to do. */
+        {
+            /* No overlap, just check whether the region is large enough. */
+            if ( end - start >= size )
+                return start;
+        }
         /* Deal with the kernel already being loaded in the region. */
-        else if ( kernel_start - start > end - kernel_end )
-            end = kernel_start;
-        else
-            start = kernel_end;
-
-        if ( end - start >= size )
+        else if ( kernel_start > start && kernel_start - start >= size )
             return start;
+        else if ( kernel_end < end && end - kernel_end >= size )
+            return kernel_end;
     }
 
     return INVALID_PADDR;
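
As a quick cross-check (again only an illustrative sketch outside the
patch, with the same made-up example numbers and a hypothetical
new_placement() helper mirroring the hunk above), the reorganized
conditionals return in-region addresses for both cases:

#include <inttypes.h>
#include <stdio.h>

typedef uint64_t paddr_t;
#define INVALID_PADDR (~(paddr_t)0)

/* New logic, reduced to the placement decision for one E820 region. */
static paddr_t new_placement(paddr_t start, paddr_t end,
                             paddr_t kernel_start, paddr_t kernel_end,
                             paddr_t size)
{
    if ( end <= kernel_start || start >= kernel_end )
    {
        /* No overlap, just check whether the region is large enough. */
        if ( end - start >= size )
            return start;
    }
    /* Deal with the kernel already being loaded in the region. */
    else if ( kernel_start > start && kernel_start - start >= size )
        return start;
    else if ( kernel_end < end && end - kernel_end >= size )
        return kernel_end;

    return INVALID_PADDR;
}

int main(void)
{
    paddr_t size = 0x200000;

    /* Case (1): returns kernel_end (0x2000000), right after the kernel. */
    printf("case (1): %#" PRIx64 "\n",
           new_placement(0x1400000, 0x3000000, 0x1000000, 0x2000000, size));

    /* Case (2): returns start (0x800000), below the kernel. */
    printf("case (2): %#" PRIx64 "\n",
           new_placement(0x800000, 0x1800000, 0x1000000, 0x2000000, size));

    return 0;
}

Here case (1) yields 0x2000000, the lowest fit within the region past
the kernel, and case (2) yields 0x800000, the start of the region below
the kernel.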