Re: [XEN v1] xen/Arm: Probe the entry point address of an uImage correctly
On 08/12/2022 19:42, Ayan Kumar Halder wrote:
>
> On 08/12/2022 16:53, Julien Grall wrote:
>> Hi,
> Hi,
>>
>> On 08/12/2022 15:24, Michal Orzel wrote:
>>> On 08/12/2022 14:51, Julien Grall wrote:
>>>> Hi,
>>>>
>>>> Extra title NIT: I have seen it multiple times and so far refrained
>>>> from saying it. Please use 'arm' rather than 'Arm'. This is for
>>>> consistency in the way we name the subsystem in the title.
>>>>
>>>> On 08/12/2022 12:49, Ayan Kumar Halder wrote:
>>>>> Currently, kernel_uimage_probe() does not set info->zimage.start. As a
>>>>> result, it contains the default value (i.e. 0). This causes
>>>>> kernel_zimage_place() to treat the binary (contained within the uImage)
>>>>> as a position-independent executable. Thus, it loads it at an incorrect
>>>>> address.
>>>>>
>>>>> The correct approach would be to read "uimage.ep" and set
>>>>> info->zimage.start. This will ensure that the binary is loaded at the
>>>>> correct address.
>>>>
>>>> In a non-statically allocated setup, a user doesn't know where the
>>>> memory for dom0/domU will be allocated.
>>>>
>>>> So I think it was correct to ignore the address. In fact, I am worried
>>>> that...
>>>>
>>>>>
>>>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@xxxxxxx>
>>>>> ---
>>>>>
>>>>> I uncovered this issue while loading Zephyr as a dom0less domU with
>>>>> Xen on R52 FVP. Zephyr builds with a static device tree. Thus, the
>>>>> load address is always fixed.
>>>>>
>>>>>  xen/arch/arm/kernel.c | 2 ++
>>>>>  1 file changed, 2 insertions(+)
>>>>>
>>>>> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
>>>>> index 2556a45c38..e4e8c67669 100644
>>>>> --- a/xen/arch/arm/kernel.c
>>>>> +++ b/xen/arch/arm/kernel.c
>>>>> @@ -222,6 +222,8 @@ static int __init kernel_uimage_probe(struct kernel_info *info,
>>>>>      if ( len > size - sizeof(uimage) )
>>>>>          return -EINVAL;
>>>>>
>>>>> +    info->zimage.start = be32_to_cpu(uimage.ep);
>>>>
>>>> ... this will now end up breaking anyone that may have set an address
>>>> but didn't care where it should be loaded.
>>>>
>>>> I also understand your use case, but now we have contradictory
>>>> approaches. I am not entirely sure how we can solve it. We may have to
>>>> break those users (CCing some folks that may use it). But we should
>>>> figure out what the alternative is for them.
>>>>
>>>> If we decide to break those users, then this should be documented in
>>>> the commit message and in docs/misc/arm/booting.txt (which,
>>>> interestingly, doesn't mention uImage).
>>>>
>>> So the first issue with Zephyr is that it does not support the zImage
>>> protocol for arm32.
>>> Volodymyr added support only for the Image header, for arm64 Zephyr.
>>> I guess this is why Ayan, wanting to boot it on Xen (arm32), decided
>>> to add a u-boot header.
>>
>> If that's the only reason, then I would rather prefer that we go with
>> zImage, for a few reasons:
>>   - The zImage protocol has at least some documentation (not perfect)
>>     of the expected state of the memory/registers when jumping to the
>>     image.
>>   - AFAICT, libxl does not (yet) support uImage. So this means Zephyr
>>     cannot be loaded on older Xen releases (not great).
>
> I am exploring a similar option to Volodymyr's, i.e. supporting the
> zImage protocol for arm32.
>
> But for that I need some public documentation that explains the zImage
> header format for arm32.
>
> Refer to xen/arch/arm/kernel.c:
>
> #define ZIMAGE32_MAGIC_OFFSET   0x24
> #define ZIMAGE32_START_OFFSET   0x28
> #define ZIMAGE32_END_OFFSET     0x2c
> #define ZIMAGE32_HEADER_LEN     0x30
>
> #define ZIMAGE32_MAGIC          0x016f2818
>
> Is this documented anywhere?
>
> I had a look at
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm/booting.rst
> but there is nothing that explains the header format.
>
>>
>> Note this doesn't mean we should not fix Xen for uImage.
>>
>>> Now, there is also a question about supporting arm64 uImage kernels.
>>> In Xen's kernel_zimage_place() we do:
>>>
>>> #ifdef CONFIG_ARM_64
>>>     if ( info->type == DOMAIN_64BIT )
>>>         return info->mem.bank[0].start + info->zimage.text_offset;
>>> #endif
>>>
>>> So if we modify the uImage behavior for arm32, we will break
>>> consistency with arm64 (we would take the uImage entry point address
>>> into account for arm32 but not for arm64).
>>> At the moment, at least, they are in sync.
>>
>> That's a good point. It would be best if the behavior is consistent.
>
> Currently, kernel_zimage_place() is called for both uImage and zImage.
>
> Would it be sane to write a different function for uImage?
>
> Something like this...
>
> static paddr_t __init kernel_uimage_place(struct kernel_info *info)
> {
>     /* Read and return the uImage header's load address */
>     return be32_to_cpu(uimage.load);
> }
>
> This would be consistent across arm32 and arm64.

All of this does not make a lot of sense, because we allocate memory for
a domain before probing the kernel image. This means that the load/entry
address for a kernel must be within the bank allocated to the domain. So
the kernel already needs to know that it is running e.g. as a Xen domU,
and carry a corresponding u-boot header asking to be loaded at e.g.
GUEST_RAM0_BASE. Otherwise, Xen will fail when trying to copy the kernel
into the domain's memory. Whereas for a domU it is easy to guess the
memory bank, for dom0 it is not.
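[To make the header discussion above concrete, here is a minimal, standalone sketch of probing a 32-bit zImage header using the offsets and magic quoted from xen/arch/arm/kernel.c. It is not Xen's actual implementation: the helper names are hypothetical, error handling is reduced to a single return code, and it assumes the image was produced with the same (little) endianness as the host, whereas Xen also copes with a byte-swapped magic.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Offsets/magic as quoted above from xen/arch/arm/kernel.c. */
#define ZIMAGE32_MAGIC_OFFSET 0x24
#define ZIMAGE32_START_OFFSET 0x28
#define ZIMAGE32_END_OFFSET   0x2c
#define ZIMAGE32_HEADER_LEN   0x30

#define ZIMAGE32_MAGIC 0x016f2818

/* Read a little-endian 32-bit word at a byte offset in the image. */
static uint32_t le32_at(const uint8_t *img, size_t off)
{
    return (uint32_t)img[off] | (uint32_t)img[off + 1] << 8 |
           (uint32_t)img[off + 2] << 16 | (uint32_t)img[off + 3] << 24;
}

/* Return 0 and fill start/end if img looks like a 32-bit zImage. */
static int zimage32_probe(const uint8_t *img, size_t size,
                          uint32_t *start, uint32_t *end)
{
    if ( size < ZIMAGE32_HEADER_LEN )
        return -1;
    if ( le32_at(img, ZIMAGE32_MAGIC_OFFSET) != ZIMAGE32_MAGIC )
        return -1;
    *start = le32_at(img, ZIMAGE32_START_OFFSET);
    *end = le32_at(img, ZIMAGE32_END_OFFSET);
    return 0;
}
```

[A start of 0 is the common case for a zImage, which is exactly why Xen's kernel_zimage_place() treats start == 0 as "position independent, place it anywhere in the bank".]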
zImage and Image are image protocols; uImage is not. It is just a legacy
u-boot header (no requirements w.r.t. booting, memory, registers, etc.).
It can be added on top of anything (even a vmlinux or a text file). So I
guess this is why Xen states that it supports zImage/Image and does not
mention uImage. A header is one thing; the boot requirements are
another. Supporting uImage is OK, but only if we specify that it must be
a u-boot header added on top of a zImage/Image.

> - Ayan
>
>>
>> Cheers,
>> ~Michal
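[For reference, the legacy u-boot "uImage" header mentioned above is a fixed 64-byte structure whose multi-byte fields are stored big-endian; the layout below follows u-boot's include/image.h (with the ih_ prefixes dropped). The accessor is an illustrative sketch, not Xen or u-boot code, and the open-coded byte swap assumes a little-endian host in place of be32_to_cpu().]

```c
#include <assert.h>
#include <stdint.h>

#define IH_MAGIC 0x27051956u  /* legacy u-boot image header magic */

/* 64-byte legacy header prepended by mkimage; fields are big-endian. */
struct uimage_header {
    uint32_t magic;    /* IH_MAGIC */
    uint32_t hcrc;     /* header CRC32 (with this field zeroed) */
    uint32_t time;     /* image creation timestamp */
    uint32_t size;     /* payload size in bytes */
    uint32_t load;     /* payload load address */
    uint32_t ep;       /* entry point address */
    uint32_t dcrc;     /* payload CRC32 */
    uint8_t  os;       /* operating system id */
    uint8_t  arch;     /* CPU architecture id */
    uint8_t  type;     /* image type (kernel, ramdisk, ...) */
    uint8_t  comp;     /* compression type */
    uint8_t  name[32]; /* image name */
};

/* Big-endian to host order, assuming a little-endian host. */
static uint32_t be32_to_host(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0xff00u) |
           ((v << 8) & 0xff0000u) | (v << 24);
}

/* Return the entry point, or 0 if the header magic does not match. */
static uint32_t uimage_entry(const struct uimage_header *h)
{
    if ( be32_to_host(h->magic) != IH_MAGIC )
        return 0;
    return be32_to_host(h->ep);
}
```

[Note that nothing in this header constrains what the payload is, which is the point made above: the load/ep fields are only meaningful if the wrapped payload is itself a zImage/Image obeying a boot protocol.]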